<?xml version="1.0" encoding="UTF-8"?> <rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:cc="http://web.resource.org/cc/" xmlns:prism="http://prismstandard.org/namespaces/basic/2.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:admin="http://webns.net/mvcb/" xmlns:content="http://purl.org/rss/1.0/modules/content/"> <channel rdf:about="https://www.mdpi.com/rss/journal/information"> <title>Information</title> <description>Latest open access articles published in Information at https://www.mdpi.com/journal/information</description> <link>https://www.mdpi.com/journal/information</link> <admin:generatorAgent rdf:resource="https://www.mdpi.com/journal/information"/> <admin:errorReportsTo rdf:resource="mailto:support@mdpi.com"/> <dc:publisher>MDPI</dc:publisher> <dc:language>en</dc:language> <dc:rights>Creative Commons Attribution (CC-BY)</dc:rights> <prism:copyright>MDPI</prism:copyright> <prism:rightsAgent>support@mdpi.com</prism:rightsAgent> <image rdf:resource="https://pub.mdpi-res.com/img/design/mdpi-pub-logo.png?13cf3b5bd783e021?1732615622"/> <items> <rdf:Seq> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/754" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/753" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/752" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/751" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/750" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/749" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/748" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/747" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/746" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/745" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/743" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/12/744" /> <rdf:li 
rdf:resource="https://www.mdpi.com/2078-2489/15/12/742" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/741" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/740" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/739" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/738" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/737" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/736" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/735" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/734" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/733" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/732" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/731" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/730" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/729" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/728" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/727" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/726" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/725" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/724" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/723" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/722" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/721" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/720" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/719" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/718" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/717" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/716" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/715" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/714" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/713" 
/> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/712" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/711" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/710" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/709" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/708" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/706" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/707" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/705" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/704" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/703" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/702" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/701" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/700" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/699" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/698" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/697" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/696" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/694" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/695" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/693" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/692" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/690" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/691" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/689" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/688" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/687" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/686" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/685" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/684" /> <rdf:li 
rdf:resource="https://www.mdpi.com/2078-2489/15/11/683" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/682" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/681" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/680" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/679" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/678" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/677" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/676" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/675" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/674" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/673" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/670" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/671" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/672" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/669" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/668" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/667" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/666" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/665" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/664" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/663" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/662" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/11/661" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/10/660" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/10/659" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/10/658" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/10/657" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/10/655" /> <rdf:li rdf:resource="https://www.mdpi.com/2078-2489/15/10/656" /> </rdf:Seq> </items> <cc:license 
rdf:resource="https://creativecommons.org/licenses/by/4.0/" /> </channel> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/754"> <title>Information, Vol. 15, Pages 754: Predictive Modeling of Water Level in the San Juan River Using Hybrid Neural Networks Integrated with Kalman Smoothing Methods</title> <link>https://www.mdpi.com/2078-2489/15/12/754</link> <description>This study presents an innovative approach to predicting the water level in the San Juan River, Choc&amp;oacute;, Colombia, by implementing two hybrid models: nonlinear auto-regressive with exogenous inputs (NARX) and long short-term memory (LSTM). These models combine artificial neural networks with smoothing techniques, including the exponential, Savitzky&amp;ndash;Golay, and Rauch&amp;ndash;Tung&amp;ndash;Striebel (RTS) smoothing filters, with the aim of improving the accuracy of hydrological predictions. Given the high rainfall in the region, the San Juan River experiences significant fluctuations in its water levels, which presents a challenge for accurate prediction. The models were trained using historical data, and various smoothing techniques were applied to optimize data quality and reduce noise. The effectiveness of the models was evaluated using standard regression metrics, such as Nash&amp;ndash;Sutcliffe efficiency (NSE), mean square error (MSE), and mean absolute error (MAE), in addition to Kling&amp;ndash;Gupta efficiency (KGE). The results show that the combination of neural networks with smoothing filters, especially the RTS filter and smoothed Kalman filter, provided the most accurate predictions, outperforming traditional methods. This research has important implications for water resource management and flood prevention in vulnerable areas such as Choc&amp;oacute;. The implementation of these hybrid models will allow local authorities to anticipate changes in water levels and plan preventive measures more effectively, thus reducing the risk of damage from extreme events. 
In summary, this study establishes a solid foundation for future research in water level prediction, highlighting the importance of integrating advanced technologies in water resources management.</description> <pubDate>2024-11-26</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 754: Predictive Modeling of Water Level in the San Juan River Using Hybrid Neural Networks Integrated with Kalman Smoothing Methods</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/754">doi: 10.3390/info15120754</a></p> <p>Authors: Jackson B. Renteria-Mena Eduardo Giraldo </p> <p>This study presents an innovative approach to predicting the water level in the San Juan River, Choc&amp;oacute;, Colombia, by implementing two hybrid models: nonlinear auto-regressive with exogenous inputs (NARX) and long short-term memory (LSTM). These models combine artificial neural networks with smoothing techniques, including the exponential, Savitzky&amp;ndash;Golay, and Rauch&amp;ndash;Tung&amp;ndash;Striebel (RTS) smoothing filters, with the aim of improving the accuracy of hydrological predictions. Given the high rainfall in the region, the San Juan River experiences significant fluctuations in its water levels, which presents a challenge for accurate prediction. The models were trained using historical data, and various smoothing techniques were applied to optimize data quality and reduce noise. The effectiveness of the models was evaluated using standard regression metrics, such as Nash&amp;ndash;Sutcliffe efficiency (NSE), mean square error (MSE), and mean absolute error (MAE), in addition to Kling&amp;ndash;Gupta efficiency (KGE). The results show that the combination of neural networks with smoothing filters, especially the RTS filter and smoothed Kalman filter, provided the most accurate predictions, outperforming traditional methods. 
This research has important implications for water resource management and flood prevention in vulnerable areas such as Choc&amp;oacute;. The implementation of these hybrid models will allow local authorities to anticipate changes in water levels and plan preventive measures more effectively, thus reducing the risk of damage from extreme events. In summary, this study establishes a solid foundation for future research in water level prediction, highlighting the importance of integrating advanced technologies in water resources management.</p> ]]></content:encoded> <dc:title>Predictive Modeling of Water Level in the San Juan River Using Hybrid Neural Networks Integrated with Kalman Smoothing Methods</dc:title> <dc:creator>Jackson B. Renteria-Mena</dc:creator> <dc:creator>Eduardo Giraldo</dc:creator> <dc:identifier>doi: 10.3390/info15120754</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-26</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-26</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>754</prism:startingPage> <prism:doi>10.3390/info15120754</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/754</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/753"> <title>Information, Vol. 15, Pages 753: Enhancing Stability and Efficiency in Mobile Ad Hoc Networks (MANETs): A Multicriteria Algorithm for Optimal Multipoint Relay Selection</title> <link>https://www.mdpi.com/2078-2489/15/12/753</link> <description>Mobile ad hoc networks (MANETs) are autonomous systems composed of multiple mobile nodes that communicate wirelessly without relying on any pre-established infrastructure. 
These networks operate in highly dynamic environments, which can compromise their ability to guarantee consistent link lifetimes, security, reliability, and overall stability. Factors such as mobility, energy availability, and security critically influence network performance. Consequently, the selection of paths and relay nodes that ensure stability, security, and extended network lifetimes is fundamental in designing routing protocols for MANETs. This selection is pivotal in maintaining robust network operations and optimizing communication efficiency. This paper introduces a sophisticated algorithm for selecting multipoint relays (MPRs) in MANETs, addressing the challenges posed by node mobility, energy constraints, and security vulnerabilities. By employing a multicriteria-weighted technique that assesses the mobility, energy levels, and trustworthiness of mobile nodes, the proposed approach enhances network stability, reachability, and longevity. The enhanced algorithm is integrated into the Optimized Link State Routing Protocol (OLSR) and validated through NS3 simulations, using the Random Waypoint and ManhattanGrid mobility models. The results indicate superior performance of the enhanced algorithm over traditional OLSR, particularly in terms of packet delivery, delay reduction, and throughput in dynamic network conditions. This study not only advances the design of routing protocols for MANETs but also significantly contributes to the development of robust communication frameworks within the realm of smart mobile communications.</description> <pubDate>2024-11-26</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 753: Enhancing Stability and Efficiency in Mobile Ad Hoc Networks (MANETs): A Multicriteria Algorithm for Optimal Multipoint Relay Selection</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/753">doi: 10.3390/info15120753</a></p> <p>Authors: Ayoub Abdellaoui Yassine Himeur Omar Alnaseri Shadi Atalla Wathiq Mansoor Jamal Elmhamdi Hussain Al-Ahmad </p> <p>Mobile ad hoc networks (MANETs) are autonomous systems composed of multiple mobile nodes that communicate wirelessly without relying on any pre-established infrastructure. These networks operate in highly dynamic environments, which can compromise their ability to guarantee consistent link lifetimes, security, reliability, and overall stability. Factors such as mobility, energy availability, and security critically influence network performance. Consequently, the selection of paths and relay nodes that ensure stability, security, and extended network lifetimes is fundamental in designing routing protocols for MANETs. This selection is pivotal in maintaining robust network operations and optimizing communication efficiency. This paper introduces a sophisticated algorithm for selecting multipoint relays (MPRs) in MANETs, addressing the challenges posed by node mobility, energy constraints, and security vulnerabilities. By employing a multicriteria-weighted technique that assesses the mobility, energy levels, and trustworthiness of mobile nodes, the proposed approach enhances network stability, reachability, and longevity. The enhanced algorithm is integrated into the Optimized Link State Routing Protocol (OLSR) and validated through NS3 simulations, using the Random Waypoint and ManhattanGrid mobility models. The results indicate superior performance of the enhanced algorithm over traditional OLSR, particularly in terms of packet delivery, delay reduction, and throughput in dynamic network conditions. 
This study not only advances the design of routing protocols for MANETs but also significantly contributes to the development of robust communication frameworks within the realm of smart mobile communications.</p> ]]></content:encoded> <dc:title>Enhancing Stability and Efficiency in Mobile Ad Hoc Networks (MANETs): A Multicriteria Algorithm for Optimal Multipoint Relay Selection</dc:title> <dc:creator>Ayoub Abdellaoui</dc:creator> <dc:creator>Yassine Himeur</dc:creator> <dc:creator>Omar Alnaseri</dc:creator> <dc:creator>Shadi Atalla</dc:creator> <dc:creator>Wathiq Mansoor</dc:creator> <dc:creator>Jamal Elmhamdi</dc:creator> <dc:creator>Hussain Al-Ahmad</dc:creator> <dc:identifier>doi: 10.3390/info15120753</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-26</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-26</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>753</prism:startingPage> <prism:doi>10.3390/info15120753</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/753</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/752"> <title>Information, Vol. 15, Pages 752: A Distributed RF Threat Sensing Architecture</title> <link>https://www.mdpi.com/2078-2489/15/12/752</link> <description>The scope of this work is to propose a distributed RF sensing architecture that interconnects and utilizes a cyber security operations center (SOC) to support long-term RF threat monitoring, alerting, and further centralized processing. For the purpose of this work, RF threats refer mainly to RF jamming, since this can jeopardize multiple wireless systems, either directly as a Denial of Service (DoS) attack, or as a means to force a cellular or WiFi wireless client to connect to a malicious system. 
Furthermore, the possibility of using the suggested architecture to monitor signals from malicious drones at short distances is also examined. The work proposes, develops, and examines the performance of RF sensing sensors that can monitor any frequency band within the range of 1 MHz to 8 GHz, through selective band pass RF filtering, and subsequently these sensors are connected to a remote SOC. The proposed sensors incorporate an automatic calibration and time-dependent environment RF profiling algorithm and procedure for optimizing RF jamming detection in a dense RF spectrum, occupied by heterogeneous RF technologies, thus minimizing false-positive alerts. The overall architecture supports TCP/IP interconnections of multiple RF jamming detection sensors through an efficient MQTT protocol, allowing the collaborative operation of sensors that are distributed in different areas of interest, depending on the scenario of interest, offering holistic monitoring by the centralized SOC. The incorporation of the centralized SOC in the overall architecture also allows the centralized application of machine learning algorithms on all the received data.</description> <pubDate>2024-11-26</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 752: A Distributed RF Threat Sensing Architecture</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/752">doi: 10.3390/info15120752</a></p> <p>Authors: Georgios Michalis Andreas Rousias Loizos Kanaris Akis Kokkinis Pantelis Kanaris Stavros Stavrou </p> <p>The scope of this work is to propose a distributed RF sensing architecture that interconnects and utilizes a cyber security operations center (SOC) to support long-term RF threat monitoring, alerting, and further centralized processing. 
For the purpose of this work, RF threats refer mainly to RF jamming, since this can jeopardize multiple wireless systems, either directly as a Denial of Service (DoS) attack, or as a means to force a cellular or WiFi wireless client to connect to a malicious system. Furthermore, the possibility of using the suggested architecture to monitor signals from malicious drones at short distances is also examined. The work proposes, develops, and examines the performance of RF sensing sensors that can monitor any frequency band within the range of 1 MHz to 8 GHz, through selective band pass RF filtering, and subsequently these sensors are connected to a remote SOC. The proposed sensors incorporate an automatic calibration and time-dependent environment RF profiling algorithm and procedure for optimizing RF jamming detection in a dense RF spectrum, occupied by heterogeneous RF technologies, thus minimizing false-positive alerts. The overall architecture supports TCP/IP interconnections of multiple RF jamming detection sensors through an efficient MQTT protocol, allowing the collaborative operation of sensors that are distributed in different areas of interest, depending on the scenario of interest, offering holistic monitoring by the centralized SOC. 
The incorporation of the centralized SOC in the overall architecture also allows the centralized application of machine learning algorithms on all the received data.</p> ]]></content:encoded> <dc:title>A Distributed RF Threat Sensing Architecture</dc:title> <dc:creator>Georgios Michalis</dc:creator> <dc:creator>Andreas Rousias</dc:creator> <dc:creator>Loizos Kanaris</dc:creator> <dc:creator>Akis Kokkinis</dc:creator> <dc:creator>Pantelis Kanaris</dc:creator> <dc:creator>Stavros Stavrou</dc:creator> <dc:identifier>doi: 10.3390/info15120752</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-26</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-26</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>752</prism:startingPage> <prism:doi>10.3390/info15120752</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/752</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/751"> <title>Information, Vol. 15, Pages 751: Exploring the Impact of Image-Based Audio Representations in Classification Tasks Using Vision Transformers and Explainable AI Techniques</title> <link>https://www.mdpi.com/2078-2489/15/12/751</link> <description>An important hurdle in medical diagnostics is the high-quality and interpretable classification of audio signals. In this study, we present an image-based representation of infant crying audio files to predict abnormal infant cries using a vision transformer and also show significant improvements in the performance and interpretability of this computer-aided tool. The use of advanced feature extraction techniques such as Gammatone Frequency Cepstral Coefficients (GFCCs) resulted in a classification accuracy of 96.33%. 
For other features (spectrogram and mel-spectrogram), the performance was very similar, with an accuracy of 93.17% for the spectrogram and 94.83% accuracy for the mel-spectrogram. We used our vision transformer (ViT) model, which is less complex but more effective than the proposed audio spectrogram transformer (AST). We incorporated explainable AI (XAI) techniques such as Layer-wise Relevance Propagation (LRP), Local Interpretable Model-agnostic Explanations (LIME), and attention mechanisms to ensure transparency and reliability in decision-making, which helped us understand the why of model predictions. The accuracy of detection was higher than previously reported and the results were easy to interpret, demonstrating that this work can potentially serve as a new benchmark for audio classification tasks, especially in medical diagnostics, and providing better prospects for an imminent future of trustworthy AI-based healthcare solutions.</description> <pubDate>2024-11-25</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 751: Exploring the Impact of Image-Based Audio Representations in Classification Tasks Using Vision Transformers and Explainable AI Techniques</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/751">doi: 10.3390/info15120751</a></p> <p>Authors: Sari Masri Ahmad Hasasneh Mohammad Tami Chakib Tadj </p> <p>An important hurdle in medical diagnostics is the high-quality and interpretable classification of audio signals. In this study, we present an image-based representation of infant crying audio files to predict abnormal infant cries using a vision transformer and also show significant improvements in the performance and interpretability of this computer-aided tool. The use of advanced feature extraction techniques such as Gammatone Frequency Cepstral Coefficients (GFCCs) resulted in a classification accuracy of 96.33%. 
For other features (spectrogram and mel-spectrogram), the performance was very similar, with an accuracy of 93.17% for the spectrogram and 94.83% accuracy for the mel-spectrogram. We used our vision transformer (ViT) model, which is less complex but more effective than the proposed audio spectrogram transformer (AST). We incorporated explainable AI (XAI) techniques such as Layer-wise Relevance Propagation (LRP), Local Interpretable Model-agnostic Explanations (LIME), and attention mechanisms to ensure transparency and reliability in decision-making, which helped us understand the why of model predictions. The accuracy of detection was higher than previously reported and the results were easy to interpret, demonstrating that this work can potentially serve as a new benchmark for audio classification tasks, especially in medical diagnostics, and providing better prospects for an imminent future of trustworthy AI-based healthcare solutions.</p> ]]></content:encoded> <dc:title>Exploring the Impact of Image-Based Audio Representations in Classification Tasks Using Vision Transformers and Explainable AI Techniques</dc:title> <dc:creator>Sari Masri</dc:creator> <dc:creator>Ahmad Hasasneh</dc:creator> <dc:creator>Mohammad Tami</dc:creator> <dc:creator>Chakib Tadj</dc:creator> <dc:identifier>doi: 10.3390/info15120751</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-25</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-25</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>751</prism:startingPage> <prism:doi>10.3390/info15120751</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/751</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/750"> <title>Information, Vol. 
15, Pages 750: Multiple Instance Bagging and Risk Histogram for Survival Time Analysis Based on Whole Slide Images of Brain Cancer Patients</title> <link>https://www.mdpi.com/2078-2489/15/12/750</link> <description>This study tackles the challenges in computer-aided prognosis for glioblastoma multiforme, a highly aggressive brain cancer, using only whole slide images (WSIs) as input. Unlike traditional methods that rely on random selection or region-of-interest (ROI) extraction to choose meaningful subsets of patches representing the whole slide, we propose a multiple instance bagging approach. This method utilizes all patches extracted from the whole slide, employing different subsets in each training epoch, thereby leveraging information from the entire slide while keeping the training computationally feasible. Additionally, we developed a two-stage framework based on the ResNet-CBAM model which estimates not just the usual survival risk, but also predicts the actual survival time. Using risk scores of patches estimated from the risk estimation stage, a risk histogram can be constructed and used as input to train a survival time prediction model. A censor hinge loss based on root mean square error was also developed to handle censored data when training the regression model. Tests using the Cancer Genome Atlas Program&amp;rsquo;s glioblastoma public database yielded a concordance index of 73.16&amp;plusmn;2.15%, surpassing existing models. Log-rank testing on predicted high- and low-risk groups using the Kaplan&amp;ndash;Meier method revealed a p-value of 3.88&amp;times;10&amp;minus;9, well below the usual threshold of 0.005, indicating the model&amp;rsquo;s ability to significantly differentiate between the two groups. We also implemented a heatmap visualization method that provides interpretable risk assessments at the patch level, potentially aiding clinicians in identifying high-risk regions within WSIs. 
Notably, these results were achieved using 98% fewer parameters compared to state-of-the-art models.</description> <pubDate>2024-11-25</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 750: Multiple Instance Bagging and Risk Histogram for Survival Time Analysis Based on Whole Slide Images of Brain Cancer Patients</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/750">doi: 10.3390/info15120750</a></p> <p>Authors: Yu Ping Chang Ya-Chun Yang Sung-Nien Yu </p> <p>This study tackles the challenges in computer-aided prognosis for glioblastoma multiforme, a highly aggressive brain cancer, using only whole slide images (WSIs) as input. Unlike traditional methods that rely on random selection or region-of-interest (ROI) extraction to choose meaningful subsets of patches representing the whole slide, we propose a multiple instance bagging approach. This method utilizes all patches extracted from the whole slide, employing different subsets in each training epoch, thereby leveraging information from the entire slide while keeping the training computationally feasible. Additionally, we developed a two-stage framework based on the ResNet-CBAM model which estimates not just the usual survival risk, but also predicts the actual survival time. Using risk scores of patches estimated from the risk estimation stage, a risk histogram can be constructed and used as input to train a survival time prediction model. A censor hinge loss based on root mean square error was also developed to handle censored data when training the regression model. Tests using the Cancer Genome Atlas Program&amp;rsquo;s glioblastoma public database yielded a concordance index of 73.16&amp;plusmn;2.15%, surpassing existing models. 
Log-rank testing on predicted high- and low-risk groups using the Kaplan&amp;ndash;Meier method revealed a p-value of 3.88&amp;times;10&amp;minus;9, well below the usual threshold of 0.005, indicating the model&amp;rsquo;s ability to significantly differentiate between the two groups. We also implemented a heatmap visualization method that provides interpretable risk assessments at the patch level, potentially aiding clinicians in identifying high-risk regions within WSIs. Notably, these results were achieved using 98% fewer parameters compared to state-of-the-art models.</p> ]]></content:encoded> <dc:title>Multiple Instance Bagging and Risk Histogram for Survival Time Analysis Based on Whole Slide Images of Brain Cancer Patients</dc:title> <dc:creator>Yu Ping Chang</dc:creator> <dc:creator>Ya-Chun Yang</dc:creator> <dc:creator>Sung-Nien Yu</dc:creator> <dc:identifier>doi: 10.3390/info15120750</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-25</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-25</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>750</prism:startingPage> <prism:doi>10.3390/info15120750</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/750</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/749"> <title>Information, Vol. 15, Pages 749: Integrating Digitalization and Asset Health Index for Strategic Life Cycle Cost Analysis of Power Converters</title> <link>https://www.mdpi.com/2078-2489/15/12/749</link> <description>In the context of energy storage systems, optimizing the life cycle of power converters is crucial for reducing costs, making informed decisions, and ensuring sustainability. 
This study presents a comprehensive methodology for calculating the life cycle cost (LCC) of power converters, employing a nine-step process that integrates digitalization, Internet of Things (IoT) technologies, and the Asset Health Index (AHI). The methodology adapts the Woodward model to provide a detailed cost analysis, encompassing the acquisition, operation, maintenance, and end-of-life phases. Our findings reveal significant insights into asset management, highlighting the importance of preventive and major maintenance in controlling failure rates and extending asset life. This study concludes that adopting sustainable business models and leveraging advanced technologies can enhance the reliability and maintainability of power converters, ultimately leading to more competitive and environmentally friendly energy storage solutions.</description> <pubDate>2024-11-25</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 749: Integrating Digitalization and Asset Health Index for Strategic Life Cycle Cost Analysis of Power Converters</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/749">doi: 10.3390/info15120749</a></p> <p>Authors: Vicente González-Prida Antonio de la Fuente Carmona Antonio J. Guillén López Juan F. Gómez Fernández Adolfo Crespo Márquez </p> <p>In the context of energy storage systems, optimizing the life cycle of power converters is crucial for reducing costs, making informed decisions, and ensuring sustainability. This study presents a comprehensive methodology for calculating the life cycle cost (LCC) of power converters, employing a nine-step process that integrates digitalization, Internet of Things (IoT) technologies, and the Asset Health Index (AHI). The methodology adapts the Woodward model to provide a detailed cost analysis, encompassing the acquisition, operation, maintenance, and end-of-life phases.
Our findings reveal significant insights into asset management, highlighting the importance of preventive and major maintenance in controlling failure rates and extending asset life. This study concludes that adopting sustainable business models and leveraging advanced technologies can enhance the reliability and maintainability of power converters, ultimately leading to more competitive and environmentally friendly energy storage solutions.</p> ]]></content:encoded> <dc:title>Integrating Digitalization and Asset Health Index for Strategic Life Cycle Cost Analysis of Power Converters</dc:title> <dc:creator>Vicente González-Prida</dc:creator> <dc:creator>Antonio de la Fuente Carmona</dc:creator> <dc:creator>Antonio J. Guillén López</dc:creator> <dc:creator>Juan F. Gómez Fernández</dc:creator> <dc:creator>Adolfo Crespo Márquez</dc:creator> <dc:identifier>doi: 10.3390/info15120749</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-25</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-25</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>749</prism:startingPage> <prism:doi>10.3390/info15120749</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/749</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/748"> <title>Information, Vol. 15, Pages 748: Test&ndash;Retest Reliability of Deep Learning Analysis of Brain Volumes in Adolescent Brain</title> <link>https://www.mdpi.com/2078-2489/15/12/748</link> <description>Magnetic resonance imaging (MRI) is essential for studying brain development and psychiatric disorders in adolescents. However, the imaging consistency remains challenging, highlighting the need for advanced methodologies to improve the diagnostic and research reliability in this unique developmental period.
Adolescence is marked by significant neuroanatomical changes, distinguishing adolescent brains from those of adults and making age-specific imaging research crucial for understanding the neuropsychiatric conditions in youth. This study examines the test&amp;ndash;retest reliability of anatomical brain MRI scans in adolescents diagnosed with depressive disorders, emphasizing a developmental perspective on neuropsychiatric disorders. Using a sample of 42 adolescents, we assessed the consistency of structural imaging metrics across 95 brain regions with deep learning-based neuroimaging analysis pipelines. The results demonstrated moderate to excellent reliability, with the intraclass correlation coefficients (ICC) ranging from 0.57 to 0.99 across regions. Notably, regions such as the pallidum, amygdala, entorhinal cortex, and white matter hypointensities showed moderate reliability, likely reflecting the challenges in the segmentation or inherent anatomical variability unique to this age group. This study highlights the necessity of integrating advanced imaging technologies to enhance the accuracy and reliability of the neuroimaging data specific to adolescents. Addressing the regional variability and strengthening the methodological rigor are essential for advancing the understanding of brain development and psychiatric disorders in this distinct developmental stage. Future research should focus on larger, more diverse samples, multi-site studies, and emerging imaging techniques to further validate the neuroimaging biomarkers. Such advancements could improve the clinical outcomes and deepen our understanding of the neuropsychiatric conditions unique to adolescence.</description> <pubDate>2024-11-25</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 748: Test&ndash;Retest Reliability of Deep Learning Analysis of Brain Volumes in Adolescent Brain</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/748">doi: 10.3390/info15120748</a></p> <p>Authors: Anna-Maria Kasparbauer Heidrun Lioba Wunram Fabian Abuhsin Friederike Körber Eckhard Schönau Stephan Bender Ibrahim Duran </p> <p>Magnetic resonance imaging (MRI) is essential for studying brain development and psychiatric disorders in adolescents. However, the imaging consistency remains challenging, highlighting the need for advanced methodologies to improve the diagnostic and research reliability in this unique developmental period. Adolescence is marked by significant neuroanatomical changes, distinguishing adolescent brains from those of adults and making age-specific imaging research crucial for understanding the neuropsychiatric conditions in youth. This study examines the test&amp;ndash;retest reliability of anatomical brain MRI scans in adolescents diagnosed with depressive disorders, emphasizing a developmental perspective on neuropsychiatric disorders. Using a sample of 42 adolescents, we assessed the consistency of structural imaging metrics across 95 brain regions with deep learning-based neuroimaging analysis pipelines. The results demonstrated moderate to excellent reliability, with the intraclass correlation coefficients (ICC) ranging from 0.57 to 0.99 across regions. Notably, regions such as the pallidum, amygdala, entorhinal cortex, and white matter hypointensities showed moderate reliability, likely reflecting the challenges in the segmentation or inherent anatomical variability unique to this age group. This study highlights the necessity of integrating advanced imaging technologies to enhance the accuracy and reliability of the neuroimaging data specific to adolescents.
Addressing the regional variability and strengthening the methodological rigor are essential for advancing the understanding of brain development and psychiatric disorders in this distinct developmental stage. Future research should focus on larger, more diverse samples, multi-site studies, and emerging imaging techniques to further validate the neuroimaging biomarkers. Such advancements could improve the clinical outcomes and deepen our understanding of the neuropsychiatric conditions unique to adolescence.</p> ]]></content:encoded> <dc:title>Test&amp;ndash;Retest Reliability of Deep Learning Analysis of Brain Volumes in Adolescent Brain</dc:title> <dc:creator>Anna-Maria Kasparbauer</dc:creator> <dc:creator>Heidrun Lioba Wunram</dc:creator> <dc:creator>Fabian Abuhsin</dc:creator> <dc:creator>Friederike Körber</dc:creator> <dc:creator>Eckhard Schönau</dc:creator> <dc:creator>Stephan Bender</dc:creator> <dc:creator>Ibrahim Duran</dc:creator> <dc:identifier>doi: 10.3390/info15120748</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-25</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-25</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>748</prism:startingPage> <prism:doi>10.3390/info15120748</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/748</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/747"> <title>Information, Vol. 15, Pages 747: A Hexagon Sensor and A Layer-Based Conversion Method for Hexagon Clusters</title> <link>https://www.mdpi.com/2078-2489/15/12/747</link> <description>In reinforcement learning (RL), precise observations are crucial for agents to learn the optimal policy from their environment.
While Unity ML-Agents offers various sensor components for automatically adjusting the observations, it does not support hexagon clusters&amp;mdash;a common feature in strategy games due to their advantageous geometric properties. As a result, users can attempt to utilize the existing sensors to observe hexagon clusters but encounter significant limitations. To address this issue, we propose a hexagon sensor and a layer-based conversion method that enable users to observe hexagon clusters with ease. By organizing the hexagon cells into structured layers, our approach ensures efficient handling of observation and spatial coherence. We provide flexible adaptation to varying observation sizes, which enables the creation of diverse strategic map designs. Our evaluations demonstrate that the hexagon sensor, combined with the layer-based conversion method, achieves a learning speed up to 1.4 times faster and yields up to twice the rewards compared to conventional sensors. Additionally, the inference performance is improved by up to 1.5 times, further validating the effectiveness of our approach.</description> <pubDate>2024-11-22</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 747: A Hexagon Sensor and A Layer-Based Conversion Method for Hexagon Clusters</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/747">doi: 10.3390/info15120747</a></p> <p>Authors: Jun-Ho Kim Hanul Sung </p> <p>In reinforcement learning (RL), precise observations are crucial for agents to learn the optimal policy from their environment. While Unity ML-Agents offers various sensor components for automatically adjusting the observations, it does not support hexagon clusters&amp;mdash;a common feature in strategy games due to their advantageous geometric properties. As a result, users can attempt to utilize the existing sensors to observe hexagon clusters but encounter significant limitations. 
To address this issue, we propose a hexagon sensor and a layer-based conversion method that enable users to observe hexagon clusters with ease. By organizing the hexagon cells into structured layers, our approach ensures efficient handling of observation and spatial coherence. We provide flexible adaptation to varying observation sizes, which enables the creation of diverse strategic map designs. Our evaluations demonstrate that the hexagon sensor, combined with the layer-based conversion method, achieves a learning speed up to 1.4 times faster and yields up to twice the rewards compared to conventional sensors. Additionally, the inference performance is improved by up to 1.5 times, further validating the effectiveness of our approach.</p> ]]></content:encoded> <dc:title>A Hexagon Sensor and A Layer-Based Conversion Method for Hexagon Clusters</dc:title> <dc:creator>Jun-Ho Kim</dc:creator> <dc:creator>Hanul Sung</dc:creator> <dc:identifier>doi: 10.3390/info15120747</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-22</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-22</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>747</prism:startingPage> <prism:doi>10.3390/info15120747</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/747</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/746"> <title>Information, Vol. 15, Pages 746: A Comprehensive Analysis of Early Alzheimer Disease Detection from 3D sMRI Images Using Deep Learning Frameworks</title> <link>https://www.mdpi.com/2078-2489/15/12/746</link> <description>Accurate diagnosis of Alzheimer&amp;rsquo;s Disease (AD) has largely focused on its later stages, often overlooking the critical need for early detection of Early Mild Cognitive Impairment (EMCI). 
Early detection is essential for potentially reducing mortality rates; however, distinguishing EMCI from Normal Cognitive (NC) individuals is challenging due to similarities in their brain patterns. To address this, we have developed a subject-level 3D-CNN architecture enhanced by preprocessing techniques to improve classification accuracy between these groups. Our experiments utilized structural Magnetic Resonance Imaging (sMRI) data from the Alzheimer&amp;rsquo;s Disease Neuroimaging Initiative (ADNI) dataset, specifically the ADNI3 collection. We included 446 subjects from the baseline and year 1 phases, comprising 164 individuals diagnosed with EMCI and 282 individuals with NC. When evaluated using 4-fold stratified cross-validation, our model achieved a validation AUC of 91.5%. On the test set, it attained an accuracy of 81.80% along with a recall of 82.50%, precision of 81.80%, and specificity of 80.50%, effectively distinguishing between the NC and EMCI groups. Additionally, a gradient class activation map was employed to highlight key regions influencing model predictions. In comparative evaluations against pretrained models and existing literature, our approach demonstrated decent performance in early AD detection.</description> <pubDate>2024-11-22</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 746: A Comprehensive Analysis of Early Alzheimer Disease Detection from 3D sMRI Images Using Deep Learning Frameworks</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/746">doi: 10.3390/info15120746</a></p> <p>Authors: Pouneh Abbasian Tracy A. Hammond </p> <p>Accurate diagnosis of Alzheimer&amp;rsquo;s Disease (AD) has largely focused on its later stages, often overlooking the critical need for early detection of Early Mild Cognitive Impairment (EMCI). 
Early detection is essential for potentially reducing mortality rates; however, distinguishing EMCI from Normal Cognitive (NC) individuals is challenging due to similarities in their brain patterns. To address this, we have developed a subject-level 3D-CNN architecture enhanced by preprocessing techniques to improve classification accuracy between these groups. Our experiments utilized structural Magnetic Resonance Imaging (sMRI) data from the Alzheimer&amp;rsquo;s Disease Neuroimaging Initiative (ADNI) dataset, specifically the ADNI3 collection. We included 446 subjects from the baseline and year 1 phases, comprising 164 individuals diagnosed with EMCI and 282 individuals with NC. When evaluated using 4-fold stratified cross-validation, our model achieved a validation AUC of 91.5%. On the test set, it attained an accuracy of 81.80% along with a recall of 82.50%, precision of 81.80%, and specificity of 80.50%, effectively distinguishing between the NC and EMCI groups. Additionally, a gradient class activation map was employed to highlight key regions influencing model predictions. In comparative evaluations against pretrained models and existing literature, our approach demonstrated decent performance in early AD detection.</p> ]]></content:encoded> <dc:title>A Comprehensive Analysis of Early Alzheimer Disease Detection from 3D sMRI Images Using Deep Learning Frameworks</dc:title> <dc:creator>Pouneh Abbasian</dc:creator> <dc:creator>Tracy A. 
Hammond</dc:creator> <dc:identifier>doi: 10.3390/info15120746</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-22</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-22</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>746</prism:startingPage> <prism:doi>10.3390/info15120746</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/746</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/745"> <title>Information, Vol. 15, Pages 745: Revisioning Healthcare Interoperability System for ABI Architectures: Introspection and Improvements</title> <link>https://www.mdpi.com/2078-2489/15/12/745</link> <description>The integration of systems for Adaptive Business Intelligence (ABI) in the healthcare industry has the potential to revolutionize and reform the way organizations approach data analysis and decision-making. By providing real-time actionable insights and enabling organizations to continuously adapt and evolve, ABI has the potential to drive better outcomes, reduce costs, and improve the overall quality of patient care. The ABI Interoperability System was designed to facilitate the usage and integration of ABI systems in healthcare environments through interoperability resources like Health Level 7 (HL7) or Fast Healthcare Interoperability Resources (FHIR). The present article briefly describes both versions of this software, learning about their differences and improvements, and how they affect the solution. 
The changes introduced in the new version of the system will tackle code quality with automated tests, development workflow, and developer experience, with the introduction of Continuous Integration and Delivery pipelines in the development workflow, new support for the FHIR pattern, and address a few security concerns about the architecture. The second revision of the system features a more refined, modern, and secure architecture and has proven to be more performant and efficient than its predecessor. As it stands, the Interoperability System poses a significant step forward toward interoperability and ease of integration in the healthcare ecosystem.</description> <pubDate>2024-11-21</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 745: Revisioning Healthcare Interoperability System for ABI Architectures: Introspection and Improvements</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/745">doi: 10.3390/info15120745</a></p> <p>Authors: João Guedes Júlio Duarte Tiago Guimarães Manuel Filipe Santos </p> <p>The integration of systems for Adaptive Business Intelligence (ABI) in the healthcare industry has the potential to revolutionize and reform the way organizations approach data analysis and decision-making. By providing real-time actionable insights and enabling organizations to continuously adapt and evolve, ABI has the potential to drive better outcomes, reduce costs, and improve the overall quality of patient care. The ABI Interoperability System was designed to facilitate the usage and integration of ABI systems in healthcare environments through interoperability resources like Health Level 7 (HL7) or Fast Healthcare Interoperability Resources (FHIR). The present article briefly describes both versions of this software, learning about their differences and improvements, and how they affect the solution.
The changes introduced in the new version of the system will tackle code quality with automated tests, development workflow, and developer experience, with the introduction of Continuous Integration and Delivery pipelines in the development workflow, new support for the FHIR pattern, and address a few security concerns about the architecture. The second revision of the system features a more refined, modern, and secure architecture and has proven to be more performant and efficient than its predecessor. As it stands, the Interoperability System poses a significant step forward toward interoperability and ease of integration in the healthcare ecosystem.</p> ]]></content:encoded> <dc:title>Revisioning Healthcare Interoperability System for ABI Architectures: Introspection and Improvements</dc:title> <dc:creator>João Guedes</dc:creator> <dc:creator>Júlio Duarte</dc:creator> <dc:creator>Tiago Guimarães</dc:creator> <dc:creator>Manuel Filipe Santos</dc:creator> <dc:identifier>doi: 10.3390/info15120745</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-21</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-21</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Review</prism:section> <prism:startingPage>745</prism:startingPage> <prism:doi>10.3390/info15120745</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/745</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/743"> <title>Information, Vol. 15, Pages 743: Implicit-Causality-Exploration-Enabled Graph Neural Network for Stock Prediction</title> <link>https://www.mdpi.com/2078-2489/15/12/743</link> <description>Accurate stock prediction plays an important role in financial markets and can aid investors in making well-informed decisions and optimizing their investment strategies.
Relationships exist among stocks in the market, leading to high correlation in their prices. Recently, several methods have been proposed to mine such relationships in order to enhance forecasting results. However, previous works have focused on exploring the correlations among stocks while neglecting the causal characteristics, thereby restricting the predictive performance. Furthermore, due to the diversity of relationships, existing methods are unable to handle both dynamic and static relationships simultaneously. To address the limitations of prior research, we introduce a novel stock trend forecasting framework capable of mining the causal relationships that affect changes in companies&amp;rsquo; stock prices and simultaneously extracts both dynamic and static features to enhance the forecasting performance. Extensive experimental results in the Chinese stock market demonstrate that the proposed framework achieves obvious improvement against multiple state-of-the-art approaches.</description> <pubDate>2024-11-21</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 743: Implicit-Causality-Exploration-Enabled Graph Neural Network for Stock Prediction</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/743">doi: 10.3390/info15120743</a></p> <p>Authors: Ying Li Xiaosha Xue Zhipeng Liu Peibo Duan Bin Zhang </p> <p>Accurate stock prediction plays an important role in financial markets and can aid investors in making well-informed decisions and optimizing their investment strategies. Relationships exist among stocks in the market, leading to high correlation in their prices. Recently, several methods have been proposed to mine such relationships in order to enhance forecasting results. However, previous works have focused on exploring the correlations among stocks while neglecting the causal characteristics, thereby restricting the predictive performance. 
Furthermore, due to the diversity of relationships, existing methods are unable to handle both dynamic and static relationships simultaneously. To address the limitations of prior research, we introduce a novel stock trend forecasting framework capable of mining the causal relationships that affect changes in companies&amp;rsquo; stock prices and simultaneously extracts both dynamic and static features to enhance the forecasting performance. Extensive experimental results in the Chinese stock market demonstrate that the proposed framework achieves obvious improvement against multiple state-of-the-art approaches.</p> ]]></content:encoded> <dc:title>Implicit-Causality-Exploration-Enabled Graph Neural Network for Stock Prediction</dc:title> <dc:creator>Ying Li</dc:creator> <dc:creator>Xiaosha Xue</dc:creator> <dc:creator>Zhipeng Liu</dc:creator> <dc:creator>Peibo Duan</dc:creator> <dc:creator>Bin Zhang</dc:creator> <dc:identifier>doi: 10.3390/info15120743</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-21</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-21</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>743</prism:startingPage> <prism:doi>10.3390/info15120743</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/743</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/744"> <title>Information, Vol. 15, Pages 744: Fitness Approximation Through Machine Learning with Dynamic Adaptation to the Evolutionary State</title> <link>https://www.mdpi.com/2078-2489/15/12/744</link> <description>We present a novel approach to performing fitness approximation in genetic algorithms (GAs) using machine learning (ML) models, focusing on dynamic adaptation to the evolutionary state. 
We compare different methods for (1) switching between actual and approximate fitness, (2) sampling the population, and (3) weighting the samples. Experimental findings demonstrate significant improvement in evolutionary runtimes, with fitness scores that are either identical or slightly lower than those of the fully run GA&amp;mdash;depending on the ratio of approximate-to-actual-fitness computation. Although we focus on evolutionary agents in Gymnasium (game) simulators&amp;mdash;where fitness computation is costly&amp;mdash;our approach is generic and can be easily applied to many different domains.</description> <pubDate>2024-11-21</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 744: Fitness Approximation Through Machine Learning with Dynamic Adaptation to the Evolutionary State</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/744">doi: 10.3390/info15120744</a></p> <p>Authors: Itai Tzruia Tomer Halperin Moshe Sipper Achiya Elyasaf </p> <p>We present a novel approach to performing fitness approximation in genetic algorithms (GAs) using machine learning (ML) models, focusing on dynamic adaptation to the evolutionary state. We compare different methods for (1) switching between actual and approximate fitness, (2) sampling the population, and (3) weighting the samples. Experimental findings demonstrate significant improvement in evolutionary runtimes, with fitness scores that are either identical or slightly lower than those of the fully run GA&amp;mdash;depending on the ratio of approximate-to-actual-fitness computation. 
Although we focus on evolutionary agents in Gymnasium (game) simulators&amp;mdash;where fitness computation is costly&amp;mdash;our approach is generic and can be easily applied to many different domains.</p> ]]></content:encoded> <dc:title>Fitness Approximation Through Machine Learning with Dynamic Adaptation to the Evolutionary State</dc:title> <dc:creator>Itai Tzruia</dc:creator> <dc:creator>Tomer Halperin</dc:creator> <dc:creator>Moshe Sipper</dc:creator> <dc:creator>Achiya Elyasaf</dc:creator> <dc:identifier>doi: 10.3390/info15120744</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-21</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-21</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>744</prism:startingPage> <prism:doi>10.3390/info15120744</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/744</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/12/742"> <title>Information, Vol. 15, Pages 742: Navigating the Disinformation Maze: A Bibliometric Analysis of Scholarly Efforts</title> <link>https://www.mdpi.com/2078-2489/15/12/742</link> <description>The increasing prevalence of disinformation has become a global challenge, exacerbated by the rapid dissemination of information in online environments. The present study conducts a bibliometric analysis of scholarly efforts made over time in the research papers associated with the disinformation field. Thus, this paper aims to understand and help combat disinformation by focusing on methodologies, datasets, and key metadata. Through a bibliometric approach, the study identifies leading authors, affiliations, and journals and examines collaboration networks in the field of disinformation. 
This analysis highlights the significant growth in research on disinformation, particularly in response to events such as the 2016 U.S. election, Brexit, and the COVID-19 pandemic, with an overall growth rate of 15.14% in the entire analyzed period. The results of the analysis underscore the role of social media and artificial intelligence in the spread of disinformation, as well as the importance of fact-checking technologies. Findings reveal that the most prolific contributions come from universities in the United States of America (USA), the United Kingdom (UK), Spain, and other global institutions, with a notable increase in publications since 2018. Through thematic maps, a keyword analysis, and collaboration networks, this study provides a comprehensive overview of the evolving field of disinformation research, offering valuable insights for future investigations and policy development.</description> <pubDate>2024-11-21</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 742: Navigating the Disinformation Maze: A Bibliometric Analysis of Scholarly Efforts</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/12/742">doi: 10.3390/info15120742</a></p> <p>Authors: George-Cristian Tătaru Adrian Domenteanu Camelia Delcea Margareta Stela Florescu Mihai Orzan Liviu-Adrian Cotfas </p> <p>The increasing prevalence of disinformation has become a global challenge, exacerbated by the rapid dissemination of information in online environments. The present study conducts a bibliometric analysis of scholarly efforts made over time in the research papers associated with the disinformation field. Thus, this paper aims to understand and help combat disinformation by focusing on methodologies, datasets, and key metadata. Through a bibliometric approach, the study identifies leading authors, affiliations, and journals and examines collaboration networks in the field of disinformation.
This analysis highlights the significant growth in research on disinformation, particularly in response to events such as the 2016 U.S. election, Brexit, and the COVID-19 pandemic, with an overall growth rate of 15.14% in the entire analyzed period. The results of the analysis underscore the role of social media and artificial intelligence in the spread of disinformation, as well as the importance of fact-checking technologies. Findings reveal that the most prolific contributions come from universities in the United States of America (USA), the United Kingdom (UK), Spain, and other global institutions, with a notable increase in publications since 2018. Through thematic maps, a keyword analysis, and collaboration networks, this study provides a comprehensive overview of the evolving field of disinformation research, offering valuable insights for future investigations and policy development.</p> ]]></content:encoded> <dc:title>Navigating the Disinformation Maze: A Bibliometric Analysis of Scholarly Efforts</dc:title> <dc:creator>George-Cristian Tătaru</dc:creator> <dc:creator>Adrian Domenteanu</dc:creator> <dc:creator>Camelia Delcea</dc:creator> <dc:creator>Margareta Stela Florescu</dc:creator> <dc:creator>Mihai Orzan</dc:creator> <dc:creator>Liviu-Adrian Cotfas</dc:creator> <dc:identifier>doi: 10.3390/info15120742</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-21</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-21</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>12</prism:number> <prism:section>Article</prism:section> <prism:startingPage>742</prism:startingPage> <prism:doi>10.3390/info15120742</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/12/742</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/741"> <title>Information, Vol.
15, Pages 741: Machine Learning-Based Methodologies for Cyber-Attacks and Network Traffic Monitoring: A Review and Insights</title> <link>https://www.mdpi.com/2078-2489/15/11/741</link> <description>The number of connected IoT devices is increasing significantly due to their many benefits, including automation, improved efficiency and quality of life, and reducing waste. However, these devices have several vulnerabilities that have led to the rapid growth in the number of attacks. Therefore, several machine learning-based intrusion detection system (IDS) tools have been developed to detect intrusions and suspicious activity to and from a host (HIDS&amp;mdash;Host IDS) or, in general, within the traffic of a network (NIDS&amp;mdash;Network IDS). The proposed work performs a comparative analysis and an ablative study among recent machine learning-based NIDSs to develop a benchmark of the different proposed strategies. The proposed work compares both shallow learning algorithms, such as decision trees, random forests, Na&amp;iuml;ve Bayes, logistic regression, XGBoost, and support vector machines, and deep learning algorithms, such as DNNs, CNNs, and LSTM, whose approach is relatively new in the literature. Also, the ensembles are tested. The algorithms are evaluated on the KDD-99, NSL-KDD, UNSW-NB15, IoT-23, and UNB-CIC IoT 2023 datasets. The results show that the NIDS tools based on deep learning approaches achieve better performance in detecting network anomalies than shallow learning approaches, and ensembles outperform all the other models.</description> <pubDate>2024-11-20</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 741: Machine Learning-Based Methodologies for Cyber-Attacks and Network Traffic Monitoring: A Review and Insights</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/741">doi: 10.3390/info15110741</a></p> <p>Authors: Filippo Genuario Giuseppe Santoro Michele Giliberti Stefania Bello Elvira Zazzera Donato Impedovo </p> <p>The number of connected IoT devices is increasing significantly due to their many benefits, including automation, improved efficiency and quality of life, and reducing waste. However, these devices have several vulnerabilities that have led to the rapid growth in the number of attacks. Therefore, several machine learning-based intrusion detection system (IDS) tools have been developed to detect intrusions and suspicious activity to and from a host (HIDS&amp;mdash;Host IDS) or, in general, within the traffic of a network (NIDS&amp;mdash;Network IDS). The proposed work performs a comparative analysis and an ablative study among recent machine learning-based NIDSs to develop a benchmark of the different proposed strategies. The proposed work compares both shallow learning algorithms, such as decision trees, random forests, Na&amp;iuml;ve Bayes, logistic regression, XGBoost, and support vector machines, and deep learning algorithms, such as DNNs, CNNs, and LSTM, whose approach is relatively new in the literature. Also, the ensembles are tested. The algorithms are evaluated on the KDD-99, NSL-KDD, UNSW-NB15, IoT-23, and UNB-CIC IoT 2023 datasets. 
The results show that the NIDS tools based on deep learning approaches achieve better performance in detecting network anomalies than shallow learning approaches, and ensembles outperform all the other models.</p> ]]></content:encoded> <dc:title>Machine Learning-Based Methodologies for Cyber-Attacks and Network Traffic Monitoring: A Review and Insights</dc:title> <dc:creator>Filippo Genuario</dc:creator> <dc:creator>Giuseppe Santoro</dc:creator> <dc:creator>Michele Giliberti</dc:creator> <dc:creator>Stefania Bello</dc:creator> <dc:creator>Elvira Zazzera</dc:creator> <dc:creator>Donato Impedovo</dc:creator> <dc:identifier>doi: 10.3390/info15110741</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-20</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-20</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>741</prism:startingPage> <prism:doi>10.3390/info15110741</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/741</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/740"> <title>Information, Vol. 15, Pages 740: Detecting Adversarial Attacks in IoT-Enabled Predictive Maintenance with Time-Series Data Augmentation</title> <link>https://www.mdpi.com/2078-2489/15/11/740</link> <description>Despite considerable advancements in integrating the Internet of Things (IoT) and artificial intelligence (AI) within the industrial maintenance framework, the increasing reliance on these innovative technologies introduces significant vulnerabilities due to cybersecurity risks, potentially compromising the integrity of decision-making processes. 
Accordingly, this study aims to offer comprehensive insights into the cybersecurity challenges associated with predictive maintenance, proposing a novel methodology that leverages generative AI for data augmentation, enhancing threat detection capabilities. Experimental evaluations conducted using the NASA Commercial Modular Aero-Propulsion System Simulation (N-CMAPSS) dataset affirm the viability of this approach leveraging the state-of-the-art TimeGAN model for temporal-aware data generation and building a recurrent classifier for attack discrimination in a balanced dataset. The classifier&amp;rsquo;s results demonstrate the satisfactory and robust performance achieved in terms of accuracy (between 80% and 90%) and how the strategic generation of data can effectively bolster the resilience of intelligent maintenance systems against cyber threats.</description> <pubDate>2024-11-20</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 740: Detecting Adversarial Attacks in IoT-Enabled Predictive Maintenance with Time-Series Data Augmentation</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/740">doi: 10.3390/info15110740</a></p> <p>Authors: Flora Amato Egidia Cirillo Mattia Fonisto Alberto Moccardi </p> <p>Despite considerable advancements in integrating the Internet of Things (IoT) and artificial intelligence (AI) within the industrial maintenance framework, the increasing reliance on these innovative technologies introduces significant vulnerabilities due to cybersecurity risks, potentially compromising the integrity of decision-making processes. Accordingly, this study aims to offer comprehensive insights into the cybersecurity challenges associated with predictive maintenance, proposing a novel methodology that leverages generative AI for data augmentation, enhancing threat detection capabilities. 
Experimental evaluations conducted using the NASA Commercial Modular Aero-Propulsion System Simulation (N-CMAPSS) dataset affirm the viability of this approach leveraging the state-of-the-art TimeGAN model for temporal-aware data generation and building a recurrent classifier for attack discrimination in a balanced dataset. The classifier&amp;rsquo;s results demonstrate the satisfactory and robust performance achieved in terms of accuracy (between 80% and 90%) and how the strategic generation of data can effectively bolster the resilience of intelligent maintenance systems against cyber threats.</p> ]]></content:encoded> <dc:title>Detecting Adversarial Attacks in IoT-Enabled Predictive Maintenance with Time-Series Data Augmentation</dc:title> <dc:creator>Flora Amato</dc:creator> <dc:creator>Egidia Cirillo</dc:creator> <dc:creator>Mattia Fonisto</dc:creator> <dc:creator>Alberto Moccardi</dc:creator> <dc:identifier>doi: 10.3390/info15110740</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-20</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-20</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>740</prism:startingPage> <prism:doi>10.3390/info15110740</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/740</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/739"> <title>Information, Vol. 15, Pages 739: PLC-Fusion: Perspective-Based Hierarchical and Deep LiDAR Camera Fusion for 3D Object Detection in Autonomous Vehicles</title> <link>https://www.mdpi.com/2078-2489/15/11/739</link> <description>Accurate 3D object detection is essential for autonomous driving, yet traditional LiDAR models often struggle with sparse point clouds. 
We propose perspective-aware hierarchical vision transformer-based LiDAR-camera fusion (PLC-Fusion) for 3D object detection to address this. This efficient, multi-modal 3D object detection framework integrates LiDAR and camera data for improved performance. First, our method enhances LiDAR data by projecting them onto a 2D plane, enabling the extraction of object perspective features from a probability map via the Object Perspective Sampling (OPS) module. It incorporates a lightweight perspective detector, consisting of interconnected 2D and monocular 3D sub-networks, to extract image features and generate object perspective proposals by predicting and refining top-scored 3D candidates. Second, it leverages two independent transformers&amp;mdash;CamViT for 2D image features and LidViT for 3D point cloud features. These ViT-based representations are fused via the Cross-Fusion module for hierarchical and deep representation learning, improving performance and computational efficiency. These mechanisms enhance the utilization of semantic features in a region of interest (ROI) to obtain more representative point features, leading to a more effective fusion of information from both LiDAR and camera sources. PLC-Fusion outperforms existing methods, achieving a mean average precision (mAP) of 83.52% and 90.37% for 3D and BEV detection, respectively. Moreover, PLC-Fusion maintains a competitive inference time of 0.18 s. Our model addresses computational bottlenecks by eliminating the need for dense BEV searches and global attention mechanisms while improving detection range and precision.</description> <pubDate>2024-11-19</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 739: PLC-Fusion: Perspective-Based Hierarchical and Deep LiDAR Camera Fusion for 3D Object Detection in Autonomous Vehicles</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/739">doi: 10.3390/info15110739</a></p> <p>Authors: Husnain Mushtaq Xiaoheng Deng Fizza Azhar Mubashir Ali Hafiz Husnain Raza Sherazi </p> <p>Accurate 3D object detection is essential for autonomous driving, yet traditional LiDAR models often struggle with sparse point clouds. We propose perspective-aware hierarchical vision transformer-based LiDAR-camera fusion (PLC-Fusion) for 3D object detection to address this. This efficient, multi-modal 3D object detection framework integrates LiDAR and camera data for improved performance. First, our method enhances LiDAR data by projecting them onto a 2D plane, enabling the extraction of object perspective features from a probability map via the Object Perspective Sampling (OPS) module. It incorporates a lightweight perspective detector, consisting of interconnected 2D and monocular 3D sub-networks, to extract image features and generate object perspective proposals by predicting and refining top-scored 3D candidates. Second, it leverages two independent transformers&amp;mdash;CamViT for 2D image features and LidViT for 3D point cloud features. These ViT-based representations are fused via the Cross-Fusion module for hierarchical and deep representation learning, improving performance and computational efficiency. These mechanisms enhance the utilization of semantic features in a region of interest (ROI) to obtain more representative point features, leading to a more effective fusion of information from both LiDAR and camera sources. PLC-Fusion outperforms existing methods, achieving a mean average precision (mAP) of 83.52% and 90.37% for 3D and BEV detection, respectively. Moreover, PLC-Fusion maintains a competitive inference time of 0.18 s. 
Our model addresses computational bottlenecks by eliminating the need for dense BEV searches and global attention mechanisms while improving detection range and precision.</p> ]]></content:encoded> <dc:title>PLC-Fusion: Perspective-Based Hierarchical and Deep LiDAR Camera Fusion for 3D Object Detection in Autonomous Vehicles</dc:title> <dc:creator>Husnain Mushtaq</dc:creator> <dc:creator>Xiaoheng Deng</dc:creator> <dc:creator>Fizza Azhar</dc:creator> <dc:creator>Mubashir Ali</dc:creator> <dc:creator>Hafiz Husnain Raza Sherazi</dc:creator> <dc:identifier>doi: 10.3390/info15110739</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-19</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-19</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>739</prism:startingPage> <prism:doi>10.3390/info15110739</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/739</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/738"> <title>Information, Vol. 15, Pages 738: SoK: The Impact of Educational Data Mining on Organisational Administration</title> <link>https://www.mdpi.com/2078-2489/15/11/738</link> <description>Educational Data Mining (EDM) applies advanced data mining techniques to analyse data from educational settings, traditionally aimed at improving student performance. However, EDM&amp;rsquo;s potential extends to enhancing administrative functions in educational organisations. This systematisation of knowledge (SoK) explores the use of EDM in organisational administration, examining peer-reviewed and non-peer-reviewed studies to provide a comprehensive understanding of its impact. This review highlights how EDM can revolutionise decision-making processes, supporting data-driven strategies that enhance administrative efficiency. 
It outlines key data mining techniques used in tasks like resource allocation, staff evaluation, and institutional planning. Challenges related to EDM implementation, such as data privacy, system integration, and the need for specialised skills, are also discussed. While EDM offers benefits like increased efficiency and informed decision-making, this review notes potential risks, including over-reliance on data and misinterpretation. The role of EDM in developing robust administrative frameworks that align with organisational goals is also explored. This study provides a critical overview of the existing literature and identifies areas for future research, offering insights to optimise educational administration through effective EDM use and highlighting its growing significance in shaping the future of educational organisations.</description> <pubDate>2024-11-19</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 738: SoK: The Impact of Educational Data Mining on Organisational Administration</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/738">doi: 10.3390/info15110738</a></p> <p>Authors: Hamad Almaghrabi Ben Soh Alice Li Idrees Alsolbi </p> <p>Educational Data Mining (EDM) applies advanced data mining techniques to analyse data from educational settings, traditionally aimed at improving student performance. However, EDM&amp;rsquo;s potential extends to enhancing administrative functions in educational organisations. This systematisation of knowledge (SoK) explores the use of EDM in organisational administration, examining peer-reviewed and non-peer-reviewed studies to provide a comprehensive understanding of its impact. This review highlights how EDM can revolutionise decision-making processes, supporting data-driven strategies that enhance administrative efficiency. It outlines key data mining techniques used in tasks like resource allocation, staff evaluation, and institutional planning. 
Challenges related to EDM implementation, such as data privacy, system integration, and the need for specialised skills, are also discussed. While EDM offers benefits like increased efficiency and informed decision-making, this review notes potential risks, including over-reliance on data and misinterpretation. The role of EDM in developing robust administrative frameworks that align with organisational goals is also explored. This study provides a critical overview of the existing literature and identifies areas for future research, offering insights to optimise educational administration through effective EDM use and highlighting its growing significance in shaping the future of educational organisations.</p> ]]></content:encoded> <dc:title>SoK: The Impact of Educational Data Mining on Organisational Administration</dc:title> <dc:creator>Hamad Almaghrabi</dc:creator> <dc:creator>Ben Soh</dc:creator> <dc:creator>Alice Li</dc:creator> <dc:creator>Idrees Alsolbi</dc:creator> <dc:identifier>doi: 10.3390/info15110738</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-19</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-19</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>738</prism:startingPage> <prism:doi>10.3390/info15110738</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/738</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/737"> <title>Information, Vol. 
15, Pages 737: Fractional Intuitionistic Fuzzy Support Vector Machine: Diabetes Tweet Classification</title> <link>https://www.mdpi.com/2078-2489/15/11/737</link> <description>Support vector machine (SVM) models apply the Karush&amp;ndash;Kuhn&amp;ndash;Tucker (KKT-OC) optimality conditions in the ordinary derivative to the primal optimisation problem, which has a major influence on the weights associated with the dissimilarity between the selected support vectors and subsequently on the quality of the model&amp;rsquo;s predictions. Recognising the capacity of fractional derivatives to provide machine learning models with more memory through more microscopic differentiations, in this paper we generalise KKT-OC based on ordinary derivatives to KKT-OC using fractional derivatives (Frac-KKT-OC). To mitigate the impact of noise and identify support vectors from noise, we apply the Frac-KKT-OC method to the fuzzy intuitionistic version of SVM (IFSVM). The fractional fuzzy intuitionistic SVM model (Frac-IFSVM) is then evaluated on six sets of data from the UCI and used to predict the sentiments embedded in tweets posted by people with diabetes. Taking into account four performance measures (sensitivity, specificity, F-measure, and G-mean), the Frac-IFSVM version outperforms SVM, FSVM, IFSVM, Frac-SVM, and Frac-FSVM.</description> <pubDate>2024-11-19</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 737: Fractional Intuitionistic Fuzzy Support Vector Machine: Diabetes Tweet Classification</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/737">doi: 10.3390/info15110737</a></p> <p>Authors: Hassan Badi Alina-Mihaela Patriciu Karim El Moutaouakil </p> <p>Support vector machine (SVM) models apply the Karush&amp;ndash;Kuhn&amp;ndash;Tucker (KKT-OC) optimality conditions in the ordinary derivative to the primal optimisation problem, which has a major influence on the weights associated with the dissimilarity between the selected support vectors and subsequently on the quality of the model&amp;rsquo;s predictions. Recognising the capacity of fractional derivatives to provide machine learning models with more memory through more microscopic differentiations, in this paper we generalise KKT-OC based on ordinary derivatives to KKT-OC using fractional derivatives (Frac-KKT-OC). To mitigate the impact of noise and identify support vectors from noise, we apply the Frac-KKT-OC method to the fuzzy intuitionistic version of SVM (IFSVM). The fractional fuzzy intuitionistic SVM model (Frac-IFSVM) is then evaluated on six sets of data from the UCI and used to predict the sentiments embedded in tweets posted by people with diabetes. 
Taking into account four performance measures (sensitivity, specificity, F-measure, and G-mean), the Frac-IFSVM version outperforms SVM, FSVM, IFSVM, Frac-SVM, and Frac-FSVM.</p> ]]></content:encoded> <dc:title>Fractional Intuitionistic Fuzzy Support Vector Machine: Diabetes Tweet Classification</dc:title> <dc:creator>Hassan Badi</dc:creator> <dc:creator>Alina-Mihaela Patriciu</dc:creator> <dc:creator>Karim El Moutaouakil</dc:creator> <dc:identifier>doi: 10.3390/info15110737</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-19</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-19</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>737</prism:startingPage> <prism:doi>10.3390/info15110737</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/737</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/736"> <title>Information, Vol. 15, Pages 736: Benchmarking for a New Railway Accident Classification Methodology and Its Database: A Case Study in Mexico, the United States, Canada, and the European Union</title> <link>https://www.mdpi.com/2078-2489/15/11/736</link> <description>Rail accidents have decreased in recent years, although not significantly if measured by train accidents recorded in the last six years. Therefore, it is essential to identify weaknesses in the implementation of security and prevention systems. This research aims to study the trend and classification of railway accidents, as well as analyze public databases. Using the business management method of benchmarking, descriptive statistics, and a novel approach to the Ishikawa diagram, this study demonstrates best practices and strategies to reduce accidents. 
Unlike previous studies, this research specifically examines public databases and provides a framework for developing the standardization of railway accident causes and recommendations. The main conclusion is that the proposed classification of railway accident causes, and its associated database, ensures that agencies, researchers, and the government have accessible, easily linkable, and usable data references to enhance their analysis and support the continued reduction of accidents.</description> <pubDate>2024-11-18</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 736: Benchmarking for a New Railway Accident Classification Methodology and Its Database: A Case Study in Mexico, the United States, Canada, and the European Union</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/736">doi: 10.3390/info15110736</a></p> <p>Authors: Tania Elizabeth Sandoval-Valencia Adriana del Carmen Téllez-Anguiano Dante Ruiz-Robles Ivon Alanis-Fuerte Alexis Vaed Vázquez-Esquivel Juan C. Jáuregui-Correa </p> <p>Rail accidents have decreased in recent years, although not significantly if measured by train accidents recorded in the last six years. Therefore, it is essential to identify weaknesses in the implementation of security and prevention systems. This research aims to study the trend and classification of railway accidents, as well as analyze public databases. Using the business management method of benchmarking, descriptive statistics, and a novel approach to the Ishikawa diagram, this study demonstrates best practices and strategies to reduce accidents. Unlike previous studies, this research specifically examines public databases and provides a framework for developing the standardization of railway accident causes and recommendations. 
The main conclusion is that the proposed classification of railway accident causes, and its associated database, ensures that agencies, researchers, and the government have accessible, easily linkable, and usable data references to enhance their analysis and support the continued reduction of accidents.</p> ]]></content:encoded> <dc:title>Benchmarking for a New Railway Accident Classification Methodology and Its Database: A Case Study in Mexico, the United States, Canada, and the European Union</dc:title> <dc:creator>Tania Elizabeth Sandoval-Valencia</dc:creator> <dc:creator>Adriana del Carmen Téllez-Anguiano</dc:creator> <dc:creator>Dante Ruiz-Robles</dc:creator> <dc:creator>Ivon Alanis-Fuerte</dc:creator> <dc:creator>Alexis Vaed Vázquez-Esquivel</dc:creator> <dc:creator>Juan C. Jáuregui-Correa</dc:creator> <dc:identifier>doi: 10.3390/info15110736</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-18</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-18</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>736</prism:startingPage> <prism:doi>10.3390/info15110736</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/736</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/735"> <title>Information, Vol. 15, Pages 735: LGFA-MTKD: Enhancing Multi-Teacher Knowledge Distillation with Local and Global Frequency Attention</title> <link>https://www.mdpi.com/2078-2489/15/11/735</link> <description>Transferring the extensive and varied knowledge contained within multiple complex models into a more compact student model poses significant challenges in multi-teacher knowledge distillation. 
Traditional distillation approaches often fall short in this context, as they struggle to fully capture and integrate the wide range of valuable information from each teacher. The variation in the knowledge offered by various teacher models complicates the student model&amp;rsquo;s ability to learn effectively and generalize well, ultimately resulting in subpar results. To overcome these constraints, we introduce an innovative method that integrates both localized and globalized frequency attention techniques, aiming to substantially enhance the distillation process. By simultaneously focusing on fine-grained local details and broad global patterns, our approach allows the student model to more effectively grasp the complex and diverse information provided by each teacher, therefore enhancing its learning capability. This dual-attention mechanism allows for a more balanced assimilation of specific details and generalized concepts, resulting in a more robust and accurate student model. Extensive experimental evaluations on standard benchmarks demonstrate that our methodology reliably exceeds the performance of current multi-teacher distillation methods, yielding outstanding outcomes regarding both performance and robustness. Specifically, our approach achieves an average performance improvement of 0.55% over CA-MKD, with a 1.05% gain in optimal conditions.</description> <pubDate>2024-11-18</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 735: LGFA-MTKD: Enhancing Multi-Teacher Knowledge Distillation with Local and Global Frequency Attention</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/735">doi: 10.3390/info15110735</a></p> <p>Authors: Xin Cheng Jinjia Zhou </p> <p>Transferring the extensive and varied knowledge contained within multiple complex models into a more compact student model poses significant challenges in multi-teacher knowledge distillation. Traditional distillation approaches often fall short in this context, as they struggle to fully capture and integrate the wide range of valuable information from each teacher. The variation in the knowledge offered by various teacher models complicates the student model&amp;rsquo;s ability to learn effectively and generalize well, ultimately resulting in subpar results. To overcome these constraints, we introduce an innovative method that integrates both localized and globalized frequency attention techniques, aiming to substantially enhance the distillation process. By simultaneously focusing on fine-grained local details and broad global patterns, our approach allows the student model to more effectively grasp the complex and diverse information provided by each teacher, therefore enhancing its learning capability. This dual-attention mechanism allows for a more balanced assimilation of specific details and generalized concepts, resulting in a more robust and accurate student model. Extensive experimental evaluations on standard benchmarks demonstrate that our methodology reliably exceeds the performance of current multi-teacher distillation methods, yielding outstanding outcomes regarding both performance and robustness. Specifically, our approach achieves an average performance improvement of 0.55% over CA-MKD, with a 1.05% gain in optimal conditions. 
These findings suggest that frequency-based attention mechanisms can unlock new potential in knowledge distillation, model compression, and transfer learning.</p> ]]></content:encoded> <dc:title>LGFA-MTKD: Enhancing Multi-Teacher Knowledge Distillation with Local and Global Frequency Attention</dc:title> <dc:creator>Xin Cheng</dc:creator> <dc:creator>Jinjia Zhou</dc:creator> <dc:identifier>doi: 10.3390/info15110735</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-18</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-18</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>735</prism:startingPage> <prism:doi>10.3390/info15110735</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/735</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/734"> <title>Information, Vol. 15, Pages 734: Zero Trust VPN (ZT-VPN): A Systematic Literature Review and Cybersecurity Framework for Hybrid and Remote Work</title> <link>https://www.mdpi.com/2078-2489/15/11/734</link> <description>Modern organizations have migrated from localized physical offices to work-from-home environments. This surge in remote work culture has exponentially increased the demand for and usage of Virtual Private Networks (VPNs), which permit remote employees to access corporate offices effectively. However, the technology raises concerns, including security threats, latency, throughput, and scalability, among others. These newer-generation threats are more complex and frequent, which makes the legacy approach to security ineffective. 
This research paper gives an overview of contemporary technologies used across enterprises, including the VPNs, Zero Trust Network Access (ZTNA), proxy servers, Secure Shell (SSH) tunnels, the software-defined wide area network (SD-WAN), and Secure Access Service Edge (SASE). This paper also presents a comprehensive cybersecurity framework named Zero Trust VPN (ZT-VPN), which is a VPN solution based on Zero Trust principles. The proposed framework aims to enhance IT security and privacy for modern enterprises in remote work environments and address concerns of latency, throughput, scalability, and security. Finally, this paper demonstrates the effectiveness of the proposed framework in various enterprise scenarios, highlighting its ability to prevent data leaks, manage access permissions, and provide seamless security transitions. The findings underscore the importance of adopting ZT-VPN to fortify cybersecurity frameworks, offering an effective protection tool against contemporary cyber threats. This research serves as a valuable reference for organizations aiming to enhance their security posture in an increasingly hostile threat landscape.</description> <pubDate>2024-11-17</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 734: Zero Trust VPN (ZT-VPN): A Systematic Literature Review and Cybersecurity Framework for Hybrid and Remote Work</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/734">doi: 10.3390/info15110734</a></p> <p>Authors: Syed Muhammad Zohaib Syed Muhammad Sajjad Zafar Iqbal Muhammad Yousaf Muhammad Haseeb Zia Muhammad </p> <p>Modern organizations have migrated from localized physical offices to work-from-home environments. This surge in remote work culture has exponentially increased the demand for and usage of Virtual Private Networks (VPNs), which permit remote employees to access corporate offices effectively. 
However, the technology raises concerns, including security threats, latency, throughput, and scalability, among others. These newer-generation threats are more complex and frequent, which makes the legacy approach to security ineffective. This research paper gives an overview of contemporary technologies used across enterprises, including VPNs, Zero Trust Network Access (ZTNA), proxy servers, Secure Shell (SSH) tunnels, software-defined wide area networks (SD-WAN), and Secure Access Service Edge (SASE). This paper also presents a comprehensive cybersecurity framework named Zero Trust VPN (ZT-VPN), which is a VPN solution based on Zero Trust principles. The proposed framework aims to enhance IT security and privacy for modern enterprises in remote work environments and address concerns of latency, throughput, scalability, and security. Finally, this paper demonstrates the effectiveness of the proposed framework in various enterprise scenarios, highlighting its ability to prevent data leaks, manage access permissions, and provide seamless security transitions. The findings underscore the importance of adopting ZT-VPN to fortify cybersecurity frameworks, offering an effective protection tool against contemporary cyber threats. 
This research serves as a valuable reference for organizations aiming to enhance their security posture in an increasingly hostile threat landscape.</p> ]]></content:encoded> <dc:title>Zero Trust VPN (ZT-VPN): A Systematic Literature Review and Cybersecurity Framework for Hybrid and Remote Work</dc:title> <dc:creator>Syed Muhammad Zohaib</dc:creator> <dc:creator>Syed Muhammad Sajjad</dc:creator> <dc:creator>Zafar Iqbal</dc:creator> <dc:creator>Muhammad Yousaf</dc:creator> <dc:creator>Muhammad Haseeb</dc:creator> <dc:creator>Zia Muhammad</dc:creator> <dc:identifier>doi: 10.3390/info15110734</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-17</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-17</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>734</prism:startingPage> <prism:doi>10.3390/info15110734</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/734</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/733"> <title>Information, Vol. 15, Pages 733: Reduced-Order Model of Coal Seam Gas Extraction Pressure Distribution Based on Deep Neural Networks and Convolutional Autoencoders</title> <link>https://www.mdpi.com/2078-2489/15/11/733</link> <description>There has been extensive research on the partial differential equations governing the theory of gas flow in coal mines. However, the traditional Proper Orthogonal Decomposition&amp;ndash;Radial Basis Function (POD-RBF) reduced-order algorithm requires significant computational resources and is inefficient when calculating high-dimensional data for coal mine gas pressure fields. To achieve the rapid computation of gas extraction pressure fields, this paper proposes a model reduction method based on deep neural networks (DNNs) and convolutional autoencoders (CAEs). 
The CAE is used to compress and reconstruct full-order numerical solutions for coal mine gas extraction, while the DNN is employed to establish the nonlinear mapping between the physical parameters of gas extraction and the latent space parameters of the reduced-order model. The DNN-CAE model is applied to the reduced-order modeling of gas extraction flow&amp;ndash;solid coupling mathematical models in coal mines. A full-order model pressure field numerical dataset for gas extraction was constructed, and optimal hyperparameters for the pressure field reconstruction model and latent space parameter prediction model were determined through hyperparameter testing. The performance of the DNN-CAE model order reduction algorithm was compared to the POD-RBF model order reduction algorithm. The results indicate that the DNN-CAE method has certain advantages over the traditional POD-RBF method in terms of pressure field reconstruction accuracy, overall structure retention, extremum capture, and computational efficiency.</description> <pubDate>2024-11-16</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 733: Reduced-Order Model of Coal Seam Gas Extraction Pressure Distribution Based on Deep Neural Networks and Convolutional Autoencoders</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/733">doi: 10.3390/info15110733</a></p> <p>Authors: Tianxuan Hao Lizhen Zhao Yang Du Yiju Tang Fan Li Zehua Wang Xu Li </p> <p>There has been extensive research on the partial differential equations governing the theory of gas flow in coal mines. However, the traditional Proper Orthogonal Decomposition&amp;ndash;Radial Basis Function (POD-RBF) reduced-order algorithm requires significant computational resources and is inefficient when calculating high-dimensional data for coal mine gas pressure fields. 
To achieve the rapid computation of gas extraction pressure fields, this paper proposes a model reduction method based on deep neural networks (DNNs) and convolutional autoencoders (CAEs). The CAE is used to compress and reconstruct full-order numerical solutions for coal mine gas extraction, while the DNN is employed to establish the nonlinear mapping between the physical parameters of gas extraction and the latent space parameters of the reduced-order model. The DNN-CAE model is applied to the reduced-order modeling of gas extraction flow&amp;ndash;solid coupling mathematical models in coal mines. A full-order model pressure field numerical dataset for gas extraction was constructed, and optimal hyperparameters for the pressure field reconstruction model and latent space parameter prediction model were determined through hyperparameter testing. The performance of the DNN-CAE model order reduction algorithm was compared to the POD-RBF model order reduction algorithm. The results indicate that the DNN-CAE method has certain advantages over the traditional POD-RBF method in terms of pressure field reconstruction accuracy, overall structure retention, extremum capture, and computational efficiency.</p> ]]></content:encoded> <dc:title>Reduced-Order Model of Coal Seam Gas Extraction Pressure Distribution Based on Deep Neural Networks and Convolutional Autoencoders</dc:title> <dc:creator>Tianxuan Hao</dc:creator> <dc:creator>Lizhen Zhao</dc:creator> <dc:creator>Yang Du</dc:creator> <dc:creator>Yiju Tang</dc:creator> <dc:creator>Fan Li</dc:creator> <dc:creator>Zehua Wang</dc:creator> <dc:creator>Xu Li</dc:creator> <dc:identifier>doi: 10.3390/info15110733</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-16</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-16</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> 
<prism:startingPage>733</prism:startingPage> <prism:doi>10.3390/info15110733</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/733</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/732"> <title>Information, Vol. 15, Pages 732: Design Strategies to Minimize Mobile Usability Issues in Navigation Design Patterns</title> <link>https://www.mdpi.com/2078-2489/15/11/732</link> <description>Recent developments in mobile technology have significantly improved the quality of life. Everyday life is increasingly becoming dependent on mobile devices as mobile applications are targeting the needs of the end users. However, many end users struggle with navigating mobile applications, leading to frustration, especially with sophisticated and unfamiliar interfaces. This study focuses on addressing specific usability issues in mobile applications by investigating the impact of introducing a floating action button (FAB) and icons with names at the bottom in popular applications such as YouTube, Plex, and IMDb. The current research includes three studies: Study-1 explores the navigation issues that users face; Study-2 measures the experiences of the users with improved navigation designs; and Study-3 compares the results of Study-1 and Study-2 to evaluate user experience with both existing and improved navigation designs. A total of 147 participants took part, and the System Usability Scale was used to evaluate the navigation design. The experiments indicated that the existing design patterns are complex and difficult to understand, leading to user frustration, compared to the newly designed and improved navigation design patterns. Moreover, the proposed newly designed navigation patterns improved the effectiveness, learnability, and usability. 
Consequently, the results highlight the importance of effective navigation design in improving user satisfaction and lowering frustration with mobile applications.</description> <pubDate>2024-11-15</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 732: Design Strategies to Minimize Mobile Usability Issues in Navigation Design Patterns</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/732">doi: 10.3390/info15110732</a></p> <p>Authors: Muhammad Umar Ibrar Hussain Toqeer Mahmood Hamid Turab Mirza C. M. Nadeem Faisal </p> <p>Recent developments in mobile technology have significantly improved the quality of life. Everyday life is increasingly becoming dependent on mobile devices as mobile applications are targeting the needs of the end users. However, many end users struggle with navigating mobile applications, leading to frustration, especially with sophisticated and unfamiliar interfaces. This study focuses on addressing specific usability issues in mobile applications by investigating the impact of introducing a floating action button (FAB) and icons with names at the bottom in popular applications such as YouTube, Plex, and IMDb. The current research includes three studies: Study-1 explores the navigation issues that users face; Study-2 measures the experiences of the users with improved navigation designs; and Study-3 compares the results of Study-1 and Study-2 to evaluate user experience with both existing and improved navigation designs. A total of 147 participants took part, and the System Usability Scale was used to evaluate the navigation design. The experiments indicated that the existing design patterns are complex and difficult to understand, leading to user frustration, compared to the newly designed and improved navigation design patterns. Moreover, the proposed newly designed navigation patterns improved the effectiveness, learnability, and usability. 
Consequently, the results highlight the importance of effective navigation design in improving user satisfaction and lowering frustration with mobile applications.</p> ]]></content:encoded> <dc:title>Design Strategies to Minimize Mobile Usability Issues in Navigation Design Patterns</dc:title> <dc:creator>Muhammad Umar</dc:creator> <dc:creator>Ibrar Hussain</dc:creator> <dc:creator>Toqeer Mahmood</dc:creator> <dc:creator>Hamid Turab Mirza</dc:creator> <dc:creator>C. M. Nadeem Faisal</dc:creator> <dc:identifier>doi: 10.3390/info15110732</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-15</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-15</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>732</prism:startingPage> <prism:doi>10.3390/info15110732</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/732</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/731"> <title>Information, Vol. 15, Pages 731: Accurately Identifying Sound vs. Rotten Cranberries Using Convolutional Neural Network</title> <link>https://www.mdpi.com/2078-2489/15/11/731</link> <description>Cranberries, native to North America, are known for their nutritional value and human health benefits. One hurdle to commercial production is losses due to fruit rot. Cranberry fruit rot results from a complex of more than ten filamentous fungi, challenging breeding for resistance. Nonetheless, our collaborative breeding program has fruit rot resistance as a significant target. This program currently relies heavily on manual sorting of sound vs. rotten cranberries. This process is labor-intensive and time-consuming, prompting the need for an automated classification (sound vs. rotten) system. 
Although many studies have focused on classifying different fruits and vegetables, no such approach has been developed for cranberries yet, partly because datasets are lacking for conducting the necessary image analyses. This research addresses this gap by introducing a novel image dataset comprising sound and rotten cranberries to facilitate computational analysis. In addition, we developed CARP (Cranberry Assessment for Rot Prediction), a convolutional neural network (CNN)-based model to distinguish sound cranberries from rotten ones. With an accuracy of 97.4%, a sensitivity of 97.2%, and a specificity of 97.2% on the training dataset and 94.8%, 95.4%, and 92.7% on the independent dataset, respectively, our proposed CNN model shows its effectiveness in accurately differentiating between sound and rotten cranberries.</description> <pubDate>2024-11-15</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 731: Accurately Identifying Sound vs. Rotten Cranberries Using Convolutional Neural Network</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/731">doi: 10.3390/info15110731</a></p> <p>Authors: Sayed Mehedi Azim Austin Spadaro Joseph Kawash James Polashock Iman Dehzangi </p> <p>Cranberries, native to North America, are known for their nutritional value and human health benefits. One hurdle to commercial production is losses due to fruit rot. Cranberry fruit rot results from a complex of more than ten filamentous fungi, challenging breeding for resistance. Nonetheless, our collaborative breeding program has fruit rot resistance as a significant target. This program currently relies heavily on manual sorting of sound vs. rotten cranberries. This process is labor-intensive and time-consuming, prompting the need for an automated classification (sound vs. rotten) system. 
Although many studies have focused on classifying different fruits and vegetables, no such approach has been developed for cranberries yet, partly because datasets are lacking for conducting the necessary image analyses. This research addresses this gap by introducing a novel image dataset comprising sound and rotten cranberries to facilitate computational analysis. In addition, we developed CARP (Cranberry Assessment for Rot Prediction), a convolutional neural network (CNN)-based model to distinguish sound cranberries from rotten ones. With an accuracy of 97.4%, a sensitivity of 97.2%, and a specificity of 97.2% on the training dataset and 94.8%, 95.4%, and 92.7% on the independent dataset, respectively, our proposed CNN model shows its effectiveness in accurately differentiating between sound and rotten cranberries.</p> ]]></content:encoded> <dc:title>Accurately Identifying Sound vs. Rotten Cranberries Using Convolutional Neural Network</dc:title> <dc:creator>Sayed Mehedi Azim</dc:creator> <dc:creator>Austin Spadaro</dc:creator> <dc:creator>Joseph Kawash</dc:creator> <dc:creator>James Polashock</dc:creator> <dc:creator>Iman Dehzangi</dc:creator> <dc:identifier>doi: 10.3390/info15110731</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-15</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-15</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>731</prism:startingPage> <prism:doi>10.3390/info15110731</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/731</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/730"> <title>Information, Vol. 
15, Pages 730: Innovative Transitions: Exploring Demand for Smart City Development in Novi Sad as a European Capital of Culture</title> <link>https://www.mdpi.com/2078-2489/15/11/730</link> <description>This study investigates the factors influencing the acceptance and implementation of smart city solutions, with a particular focus on smart mobility and digital services in Novi Sad, one of the leading urban centers in Serbia. Employing a quantitative methodology, the research encompasses citizens&amp;rsquo; perceptions of the benefits of smart technologies, their level of awareness regarding smart solutions, the degree of engagement in using digital services, and their interest in smart mobility. The results indicate that these factors are crucial for the successful integration of smart technologies. Notably, awareness of smart city initiatives and the perceived benefits, such as improved mobility, reduced traffic congestion, increased energy efficiency, and enhanced quality of life, are highlighted as key prerequisites for the adoption of these solutions. Novi Sad, as the European Capital of Culture in 2022, presents a unique opportunity for the implementation of these technologies. Our findings point to the need for strategic campaigns aimed at educating and raising public awareness. The practical implications of this study could contribute to shaping policies that encourage the development of smart cities, not only in Novi Sad but also in other urban areas across Serbia and the region. This study confirms the importance of citizen engagement and technological literacy in the transformation of urban environments through smart solutions, underscoring the potential of these technologies to improve everyday life and achieve sustainable urban development.</description> <pubDate>2024-11-15</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 730: Innovative Transitions: Exploring Demand for Smart City Development in Novi Sad as a European Capital of Culture</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/730">doi: 10.3390/info15110730</a></p> <p>Authors: Minja Bolesnikov Mario Silić Dario Silić Boris Dumnić Jelena Ćulibrk Maja Petrović Tamara Gajić </p> <p>This study investigates the factors influencing the acceptance and implementation of smart city solutions, with a particular focus on smart mobility and digital services in Novi Sad, one of the leading urban centers in Serbia. Employing a quantitative methodology, the research encompasses citizens&amp;rsquo; perceptions of the benefits of smart technologies, their level of awareness regarding smart solutions, the degree of engagement in using digital services, and their interest in smart mobility. The results indicate that these factors are crucial for the successful integration of smart technologies. Notably, awareness of smart city initiatives and the perceived benefits, such as improved mobility, reduced traffic congestion, increased energy efficiency, and enhanced quality of life, are highlighted as key prerequisites for the adoption of these solutions. Novi Sad, as the European Capital of Culture in 2022, presents a unique opportunity for the implementation of these technologies. Our findings point to the need for strategic campaigns aimed at educating and raising public awareness. The practical implications of this study could contribute to shaping policies that encourage the development of smart cities, not only in Novi Sad but also in other urban areas across Serbia and the region. 
This study confirms the importance of citizen engagement and technological literacy in the transformation of urban environments through smart solutions, underscoring the potential of these technologies to improve everyday life and achieve sustainable urban development.</p> ]]></content:encoded> <dc:title>Innovative Transitions: Exploring Demand for Smart City Development in Novi Sad as a European Capital of Culture</dc:title> <dc:creator>Minja Bolesnikov</dc:creator> <dc:creator>Mario Silić</dc:creator> <dc:creator>Dario Silić</dc:creator> <dc:creator>Boris Dumnić</dc:creator> <dc:creator>Jelena Ćulibrk</dc:creator> <dc:creator>Maja Petrović</dc:creator> <dc:creator>Tamara Gajić</dc:creator> <dc:identifier>doi: 10.3390/info15110730</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-15</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-15</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>730</prism:startingPage> <prism:doi>10.3390/info15110730</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/730</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/729"> <title>Information, Vol. 15, Pages 729: The Ethics and Cybersecurity of Artificial Intelligence and Robotics in Helping The Elderly to Manage at Home</title> <link>https://www.mdpi.com/2078-2489/15/11/729</link> <description>The aging population, combined with the scarcity of healthcare resources, presents significant challenges for our society. The use of artificial intelligence (AI) and robotics offers a potential solution to these challenges. However, such technologies also raise ethical and cybersecurity concerns related to the preservation of privacy, autonomy, and human contact. 
In this case study, we examine these ethical challenges and the opportunities brought by AI and robotics in the care of old individuals at home. This article aims to describe the current fragmented state of legislation related to the development and use of AI-based services and robotics and to reflect on their ethics and cybersecurity. The findings indicate that, guided by ethical principles, we can leverage the best aspects of technology while ensuring that old people can maintain a dignified and valued life at home. The careful handling of ethical issues should be viewed as a competitive advantage and opportunity, rather than a burden.</description> <pubDate>2024-11-15</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 729: The Ethics and Cybersecurity of Artificial Intelligence and Robotics in Helping The Elderly to Manage at Home</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/729">doi: 10.3390/info15110729</a></p> <p>Authors: Jyri Rajamäki Jaakko Helin </p> <p>The aging population, combined with the scarcity of healthcare resources, presents significant challenges for our society. The use of artificial intelligence (AI) and robotics offers a potential solution to these challenges. However, such technologies also raise ethical and cybersecurity concerns related to the preservation of privacy, autonomy, and human contact. In this case study, we examine these ethical challenges and the opportunities brought by AI and robotics in the care of old individuals at home. This article aims to describe the current fragmented state of legislation related to the development and use of AI-based services and robotics and to reflect on their ethics and cybersecurity. The findings indicate that, guided by ethical principles, we can leverage the best aspects of technology while ensuring that old people can maintain a dignified and valued life at home. 
The careful handling of ethical issues should be viewed as a competitive advantage and opportunity, rather than a burden.</p> ]]></content:encoded> <dc:title>The Ethics and Cybersecurity of Artificial Intelligence and Robotics in Helping The Elderly to Manage at Home</dc:title> <dc:creator>Jyri Rajamäki</dc:creator> <dc:creator>Jaakko Helin</dc:creator> <dc:identifier>doi: 10.3390/info15110729</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-15</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-15</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>729</prism:startingPage> <prism:doi>10.3390/info15110729</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/729</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/728"> <title>Information, Vol. 15, Pages 728: Collaborative Intelligence for Safety-Critical Industries: A Literature Review</title> <link>https://www.mdpi.com/2078-2489/15/11/728</link> <description>While AI-driven automation can increase the performance and safety of systems, humans should not be replaced in safety-critical systems but should be integrated to collaborate and mitigate each other&amp;rsquo;s limitations. The current trend in Industry 5.0 is towards human-centric collaborative paradigms, with an emphasis on collaborative intelligence (CI) or Hybrid Intelligent Systems. In this survey, we search and review recent work that employs AI methods for collaborative intelligence applications, specifically those that focus on safety and safety-critical industries. We aim to contribute to the research landscape and industry by compiling and analyzing a range of scenarios where AI can be used to achieve more efficient human&amp;ndash;machine interactions, improved collaboration, coordination, and safety. 
We define a domain-focused taxonomy to categorize the diverse CI solutions, based on the type of collaborative interaction between intelligent systems and humans, the AI paradigm used, and the domain of the AI problem, while highlighting safety issues. We investigate 91 articles on CI research published between 2014 and 2023, providing insights into the trends, gaps, and techniques used, to guide recommendations for future research opportunities in the fast-developing collaborative intelligence field.</description> <pubDate>2024-11-12</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 728: Collaborative Intelligence for Safety-Critical Industries: A Literature Review</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/728">doi: 10.3390/info15110728</a></p> <p>Authors: Inês F. Ramos Gabriele Gianini Maria Chiara Leva Ernesto Damiani </p> <p>While AI-driven automation can increase the performance and safety of systems, humans should not be replaced in safety-critical systems but should be integrated to collaborate and mitigate each other&amp;rsquo;s limitations. The current trend in Industry 5.0 is towards human-centric collaborative paradigms, with an emphasis on collaborative intelligence (CI) or Hybrid Intelligent Systems. In this survey, we search and review recent work that employs AI methods for collaborative intelligence applications, specifically those that focus on safety and safety-critical industries. We aim to contribute to the research landscape and industry by compiling and analyzing a range of scenarios where AI can be used to achieve more efficient human&amp;ndash;machine interactions, improved collaboration, coordination, and safety. We define a domain-focused taxonomy to categorize the diverse CI solutions, based on the type of collaborative interaction between intelligent systems and humans, the AI paradigm used, and the domain of the AI problem, while highlighting safety issues. 
We investigate 91 articles on CI research published between 2014 and 2023, providing insights into the trends, gaps, and techniques used, to guide recommendations for future research opportunities in the fast-developing collaborative intelligence field.</p> ]]></content:encoded> <dc:title>Collaborative Intelligence for Safety-Critical Industries: A Literature Review</dc:title> <dc:creator>Inês F. Ramos</dc:creator> <dc:creator>Gabriele Gianini</dc:creator> <dc:creator>Maria Chiara Leva</dc:creator> <dc:creator>Ernesto Damiani</dc:creator> <dc:identifier>doi: 10.3390/info15110728</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-12</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-12</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>728</prism:startingPage> <prism:doi>10.3390/info15110728</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/728</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/727"> <title>Information, Vol. 15, Pages 727: Analysis of Quantum-Classical Hybrid Deep Learning for 6G Image Processing with Copyright Detection</title> <link>https://www.mdpi.com/2078-2489/15/11/727</link> <description>This study investigates the integration of quantum computing, classical methods, and deep learning techniques for enhanced image processing in dynamic 6G networks, while also addressing essential aspects of copyright technology and detection. Our findings indicate that quantum methods excel in rapid edge detection and feature extraction but encounter difficulties in maintaining image quality compared to classical approaches. In contrast, classical methods preserve higher image fidelity but struggle to satisfy the real-time processing requirements of 6G applications. 
Deep learning techniques, particularly CNNs, demonstrate potential in complex image analysis tasks but demand substantial computational resources. To promote the ethical use of AI-generated images, we introduce copyright detection mechanisms that employ advanced algorithms to identify potential infringements in generated content. This integration improves adherence to intellectual property rights and legal standards, supporting the responsible implementation of image processing technologies. We suggest that the future of image processing in 6G networks resides in hybrid systems that effectively utilize the strengths of each approach while incorporating robust copyright detection capabilities. These insights contribute to the development of efficient, high-performance image processing systems in next-generation networks, highlighting the promise of integrated quantum&amp;ndash;classical deep learning architectures within 6G environments.</description> <pubDate>2024-11-12</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 727: Analysis of Quantum-Classical Hybrid Deep Learning for 6G Image Processing with Copyright Detection</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/727">doi: 10.3390/info15110727</a></p> <p>Authors: Jongho Seol Hye-Young Kim Abhilash Kancharla Jongyeop Kim </p> <p>This study investigates the integration of quantum computing, classical methods, and deep learning techniques for enhanced image processing in dynamic 6G networks, while also addressing essential aspects of copyright technology and detection. Our findings indicate that quantum methods excel in rapid edge detection and feature extraction but encounter difficulties in maintaining image quality compared to classical approaches. In contrast, classical methods preserve higher image fidelity but struggle to satisfy the real-time processing requirements of 6G applications. 
Deep learning techniques, particularly CNNs, demonstrate potential in complex image analysis tasks but demand substantial computational resources. To promote the ethical use of AI-generated images, we introduce copyright detection mechanisms that employ advanced algorithms to identify potential infringements in generated content. This integration improves adherence to intellectual property rights and legal standards, supporting the responsible implementation of image processing technologies. We suggest that the future of image processing in 6G networks resides in hybrid systems that effectively utilize the strengths of each approach while incorporating robust copyright detection capabilities. These insights contribute to the development of efficient, high-performance image processing systems in next-generation networks, highlighting the promise of integrated quantum&amp;ndash;classical deep learning architectures within 6G environments.</p> ]]></content:encoded> <dc:title>Analysis of Quantum-Classical Hybrid Deep Learning for 6G Image Processing with Copyright Detection</dc:title> <dc:creator>Jongho Seol</dc:creator> <dc:creator>Hye-Young Kim</dc:creator> <dc:creator>Abhilash Kancharla</dc:creator> <dc:creator>Jongyeop Kim</dc:creator> <dc:identifier>doi: 10.3390/info15110727</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-12</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-12</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>727</prism:startingPage> <prism:doi>10.3390/info15110727</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/727</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/726"> <title>Information, Vol. 
15, Pages 726: Sprint Management in Agile Approach: Progress and Velocity Evaluation Applying Machine Learning</title> <link>https://www.mdpi.com/2078-2489/15/11/726</link> <description>Nowadays, technology plays a fundamental role in data collection and analysis, which are essential for decision-making in various fields. Agile methodologies have transformed project management by focusing on continuous delivery and adaptation to change. In multiple project management, assessing the progress and pace of work in Sprints is particularly important. In this work, a data model was developed to evaluate the progress and pace of work, based on the visual interpretation of numerical data from certain graphs that allow tracking, such as the Burndown chart. Additionally, experiments with machine learning algorithms were carried out to validate the effectiveness and potential improvements facilitated by this dataset development.</description> <pubDate>2024-11-12</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 726: Sprint Management in Agile Approach: Progress and Velocity Evaluation Applying Machine Learning</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/726">doi: 10.3390/info15110726</a></p> <p>Authors: Yadira Jazmín Pérez Castillo Sandra Dinora Orantes Jiménez Patricio Orlando Letelier Torres </p> <p>Nowadays, technology plays a fundamental role in data collection and analysis, which are essential for decision-making in various fields. Agile methodologies have transformed project management by focusing on continuous delivery and adaptation to change. In multiple project management, assessing the progress and pace of work in Sprints is particularly important. In this work, a data model was developed to evaluate the progress and pace of work, based on the visual interpretation of numerical data from certain graphs that allow tracking, such as the Burndown chart. 
Additionally, experiments with machine learning algorithms were carried out to validate the effectiveness and potential improvements facilitated by this dataset development.</p> ]]></content:encoded> <dc:title>Sprint Management in Agile Approach: Progress and Velocity Evaluation Applying Machine Learning</dc:title> <dc:creator>Yadira Jazmín Pérez Castillo</dc:creator> <dc:creator>Sandra Dinora Orantes Jiménez</dc:creator> <dc:creator>Patricio Orlando Letelier Torres</dc:creator> <dc:identifier>doi: 10.3390/info15110726</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-12</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-12</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>726</prism:startingPage> <prism:doi>10.3390/info15110726</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/726</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/725"> <title>Information, Vol. 15, Pages 725: AI for Decision Support: Balancing Accuracy, Transparency, and Trust Across Sectors</title> <link>https://www.mdpi.com/2078-2489/15/11/725</link> <description>This study seeks to understand the key success factors that underpin efficiency, transparency, and user trust in automated decision support systems (DSS) that leverage AI technologies across industries. The aim of this study is to facilitate more accurate decision-making with such AI-based DSS, as well as build trust through the need for visibility and explainability by increasing user acceptance. This study primarily examines the nature of AI-based DSS adoption and the challenges of maintaining system transparency and improving accuracy. 
The results provide practical guidance for professionals and decision-makers to develop AI-driven decision support systems that are not only effective but also trusted by users. The results also offer insight into how artificial intelligence fits into and combines with decision-making, which is valuable when considering how to embed such systems within ethical standards.</description> <pubDate>2024-11-11</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 725: AI for Decision Support: Balancing Accuracy, Transparency, and Trust Across Sectors</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/725">doi: 10.3390/info15110725</a></p> <p>Authors: Attila Kovari </p> <p>This study seeks to understand the key success factors that underpin efficiency, transparency, and user trust in automated decision support systems (DSS) that leverage AI technologies across industries. The aim of this study is to facilitate more accurate decision-making with such AI-based DSS, as well as build trust through the need for visibility and explainability by increasing user acceptance. This study primarily examines the nature of AI-based DSS adoption and the challenges of maintaining system transparency and improving accuracy. The results provide practical guidance for professionals and decision-makers to develop AI-driven decision support systems that are not only effective but also trusted by users. 
The results also offer insight into how artificial intelligence fits into and combines with decision-making, which is valuable when considering how to embed such systems within ethical standards.</p> ]]></content:encoded> <dc:title>AI for Decision Support: Balancing Accuracy, Transparency, and Trust Across Sectors</dc:title> <dc:creator>Attila Kovari</dc:creator> <dc:identifier>doi: 10.3390/info15110725</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-11</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-11</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>725</prism:startingPage> <prism:doi>10.3390/info15110725</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/725</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/724"> <title>Information, Vol. 15, Pages 724: Optimization of Business Processes Through BPM Methodology: A Case Study on Data Analysis and Performance Improvement</title> <link>https://www.mdpi.com/2078-2489/15/11/724</link> <description>This study explores the application of the BPM lifecycle to optimize the market analysis process within the market intelligence department of a major energy company. The semi-structured, virtual nature of the process necessitated careful adaptation of BPM methodology, starting with process discovery through data collection, modeling, and validation. Qualitative analysis, including value-added and root-cause analysis, revealed inefficiencies. The redesign strategy focused on selective automation using Python 3.10 scripts and Power BI dashboards, incorporating techniques such as linear programming and forecasting to improve process efficiency and quality while maintaining flexibility. 
Post-implementation, monitoring through a questionnaire showed positive results, though ongoing interviews were recommended for sustained performance evaluation. This study highlights the value of BPM methodology in enhancing decision-critical processes and offers a model for adaptable, value-driven process improvements in complex organizational environments.</description> <pubDate>2024-11-11</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 724: Optimization of Business Processes Through BPM Methodology: A Case Study on Data Analysis and Performance Improvement</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/724">doi: 10.3390/info15110724</a></p> <p>Authors: António Ricardo Teixeira José Vasconcelos Ferreira Ana Luísa Ramos </p> <p>This study explores the application of the BPM lifecycle to optimize the market analysis process within the market intelligence department of a major energy company. The semi-structured, virtual nature of the process necessitated careful adaptation of BPM methodology, starting with process discovery through data collection, modeling, and validation. Qualitative analysis, including value-added and root-cause analysis, revealed inefficiencies. The redesign strategy focused on selective automation using Python 3.10 scripts and Power BI dashboards, incorporating techniques such as linear programming and forecasting to improve process efficiency and quality while maintaining flexibility. Post-implementation, monitoring through a questionnaire showed positive results, though ongoing interviews were recommended for sustained performance evaluation. 
This study highlights the value of BPM methodology in enhancing decision-critical processes and offers a model for adaptable, value-driven process improvements in complex organizational environments.</p> ]]></content:encoded> <dc:title>Optimization of Business Processes Through BPM Methodology: A Case Study on Data Analysis and Performance Improvement</dc:title> <dc:creator>António Ricardo Teixeira</dc:creator> <dc:creator>José Vasconcelos Ferreira</dc:creator> <dc:creator>Ana Luísa Ramos</dc:creator> <dc:identifier>doi: 10.3390/info15110724</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-11</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-11</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>724</prism:startingPage> <prism:doi>10.3390/info15110724</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/724</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/723"> <title>Information, Vol. 15, Pages 723: Privacy-Preserving ConvMixer Without Any Accuracy Degradation Using Compressible Encrypted Images</title> <link>https://www.mdpi.com/2078-2489/15/11/723</link> <description>We propose an enhanced privacy-preserving method for image classification using ConvMixer, which is an extremely simple model that is similar in spirit to the Vision Transformer (ViT). Most privacy-preserving methods using encrypted images cause the performance of models to degrade due to the influence of encryption, but a state-of-the-art method was demonstrated to have the same classification accuracy as that of models without any encryption under the use of ViT. 
However, the method, in which a common secret key is assigned to each patch, is not robust enough against ciphertext-only attacks (COAs) including jigsaw puzzle solver attacks if compressible encrypted images are used. In addition, ConvMixer is less robust than ViT because there is no position embedding. To overcome this issue, we propose a novel block-wise encryption method that allows us to assign an independent key to each patch to enhance robustness against attacks. In experiments, the effectiveness of the method is verified in terms of image classification accuracy and robustness, and it is compared with conventional privacy-preserving methods using image encryption.</description> <pubDate>2024-11-11</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 723: Privacy-Preserving ConvMixer Without Any Accuracy Degradation Using Compressible Encrypted Images</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/723">doi: 10.3390/info15110723</a></p> <p>Authors: Haiwei Lin Shoko Imaizumi Hitoshi Kiya </p> <p>We propose an enhanced privacy-preserving method for image classification using ConvMixer, which is an extremely simple model that is similar in spirit to the Vision Transformer (ViT). Most privacy-preserving methods using encrypted images cause the performance of models to degrade due to the influence of encryption, but a state-of-the-art method was demonstrated to have the same classification accuracy as that of models without any encryption under the use of ViT. However, the method, in which a common secret key is assigned to each patch, is not robust enough against ciphertext-only attacks (COAs) including jigsaw puzzle solver attacks if compressible encrypted images are used. In addition, ConvMixer is less robust than ViT because there is no position embedding. 
To overcome this issue, we propose a novel block-wise encryption method that allows us to assign an independent key to each patch to enhance robustness against attacks. In experiments, the effectiveness of the method is verified in terms of image classification accuracy and robustness, and it is compared with conventional privacy-preserving methods using image encryption.</p> ]]></content:encoded> <dc:title>Privacy-Preserving ConvMixer Without Any Accuracy Degradation Using Compressible Encrypted Images</dc:title> <dc:creator>Haiwei Lin</dc:creator> <dc:creator>Shoko Imaizumi</dc:creator> <dc:creator>Hitoshi Kiya</dc:creator> <dc:identifier>doi: 10.3390/info15110723</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-11</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-11</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>723</prism:startingPage> <prism:doi>10.3390/info15110723</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/723</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/722"> <title>Information, Vol. 15, Pages 722: Malware Classification Using Few-Shot Learning Approach</title> <link>https://www.mdpi.com/2078-2489/15/11/722</link> <description>Malware detection, targeting the microarchitecture of processors, has recently come to light as a potentially effective way to improve computer system security. Hardware Performance Counter data are used by machine learning algorithms in security mechanisms, such as hardware-based malware detection, to categorize and detect malware. It is crucial to determine whether or not a file contains malware. Many issues have been brought about by the rise in malware, and businesses are losing vital data and dealing with other issues. 
The second thing to keep in mind is that malware can quickly cause a lot of damage to a system by slowing it down and encrypting a large amount of data on a personal computer. This study provides extensive details on a flexible framework related to machine learning and deep learning techniques using few-shot learning. Malware detection is possible using DT, RF, LR, SVM, and FSL techniques. The logic is that these algorithms make it simple to differentiate between files that are malware-free and those that are not. This indicates that their goal is to reduce the number of false positives in the data. For this, we use two different datasets from an online platform. In this research work, we mainly focus on few-shot learning techniques by using two different datasets. The proposed model has a 97% accuracy rate, which is much greater than that of other techniques.</description> <pubDate>2024-11-11</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 722: Malware Classification Using Few-Shot Learning Approach</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/722">doi: 10.3390/info15110722</a></p> <p>Authors: Khalid Alfarsi Saim Rasheed Iftikhar Ahmad </p> <p>Malware detection, targeting the microarchitecture of processors, has recently come to light as a potentially effective way to improve computer system security. Hardware Performance Counter data are used by machine learning algorithms in security mechanisms, such as hardware-based malware detection, to categorize and detect malware. It is crucial to determine whether or not a file contains malware. Many issues have been brought about by the rise in malware, and businesses are losing vital data and dealing with other issues. The second thing to keep in mind is that malware can quickly cause a lot of damage to a system by slowing it down and encrypting a large amount of data on a personal computer. 
This study provides extensive details on a flexible framework related to machine learning and deep learning techniques using few-shot learning. Malware detection is possible using DT, RF, LR, SVM, and FSL techniques. The logic is that these algorithms make it simple to differentiate between files that are malware-free and those that are not. This indicates that their goal is to reduce the number of false positives in the data. For this, we use two different datasets from an online platform. In this research work, we mainly focus on few-shot learning techniques by using two different datasets. The proposed model has a 97% accuracy rate, which is much greater than that of other techniques.</p> ]]></content:encoded> <dc:title>Malware Classification Using Few-Shot Learning Approach</dc:title> <dc:creator>Khalid Alfarsi</dc:creator> <dc:creator>Saim Rasheed</dc:creator> <dc:creator>Iftikhar Ahmad</dc:creator> <dc:identifier>doi: 10.3390/info15110722</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-11</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-11</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>722</prism:startingPage> <prism:doi>10.3390/info15110722</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/722</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/721"> <title>Information, Vol. 15, Pages 721: Advances and Challenges in Automated Drowning Detection and Prevention Systems</title> <link>https://www.mdpi.com/2078-2489/15/11/721</link> <description>Drowning is among the most common causes of death for children aged one to fourteen around the globe, ranking as the third leading cause of unintentional injury death. 
With rising populations and the growing popularity of swimming pools in hotels and villas, the incidence of drowning has accelerated. Accordingly, the development of systems for detecting and preventing drowning has become increasingly critical to provide safe swimming settings. In this paper, we present a comprehensive review of recent advancements in automated drowning detection and prevention systems. The existing approaches can be broadly categorized according to their objectives into two main groups: detection-based systems, which alert lifeguards or parents to perform manual rescues, and detection and rescue-based systems, which integrate detection with automatic rescue mechanisms. Automatic drowning detection approaches could be further categorized into computer vision-based approaches, where camera-captured images are analyzed by machine learning algorithms to detect instances of drowning, and sensing-based approaches, where sensing instruments are attached to swimmers to monitor their physical parameters. We explore the advantages and limitations of each approach. Additionally, we highlight technical challenges and unresolved issues related to this domain, such as data imbalance, accuracy, privacy concerns, and integration with rescue systems. We also identify future research opportunities, emphasizing the need for more advanced AI models, uniform datasets, and better integration of detection with autonomous rescue mechanisms. This study aims to provide a critical resource for researchers and practitioners, facilitating the development of more effective systems to enhance water safety and minimize drowning incidents.</description> <pubDate>2024-11-11</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 721: Advances and Challenges in Automated Drowning Detection and Prevention Systems</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/721">doi: 10.3390/info15110721</a></p> <p>Authors: Maad Shatnawi Frdoos Albreiki Ashwaq Alkhoori Mariam Alhebshi Anas Shatnawi </p> <p>Drowning is among the most common causes of death for children aged one to fourteen around the globe, ranking as the third leading cause of unintentional injury death. With rising populations and the growing popularity of swimming pools in hotels and villas, the incidence of drowning has accelerated. Accordingly, the development of systems for detecting and preventing drowning has become increasingly critical to provide safe swimming settings. In this paper, we present a comprehensive review of recent advancements in automated drowning detection and prevention systems. The existing approaches can be broadly categorized according to their objectives into two main groups: detection-based systems, which alert lifeguards or parents to perform manual rescues, and detection and rescue-based systems, which integrate detection with automatic rescue mechanisms. Automatic drowning detection approaches could be further categorized into computer vision-based approaches, where camera-captured images are analyzed by machine learning algorithms to detect instances of drowning, and sensing-based approaches, where sensing instruments are attached to swimmers to monitor their physical parameters. We explore the advantages and limitations of each approach. Additionally, we highlight technical challenges and unresolved issues related to this domain, such as data imbalance, accuracy, privacy concerns, and integration with rescue systems. We also identify future research opportunities, emphasizing the need for more advanced AI models, uniform datasets, and better integration of detection with autonomous rescue mechanisms. 
This study aims to provide a critical resource for researchers and practitioners, facilitating the development of more effective systems to enhance water safety and minimize drowning incidents.</p> ]]></content:encoded> <dc:title>Advances and Challenges in Automated Drowning Detection and Prevention Systems</dc:title> <dc:creator>Maad Shatnawi</dc:creator> <dc:creator>Frdoos Albreiki</dc:creator> <dc:creator>Ashwaq Alkhoori</dc:creator> <dc:creator>Mariam Alhebshi</dc:creator> <dc:creator>Anas Shatnawi</dc:creator> <dc:identifier>doi: 10.3390/info15110721</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-11</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-11</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>721</prism:startingPage> <prism:doi>10.3390/info15110721</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/721</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/720"> <title>Information, Vol. 15, Pages 720: Deep Learning and Knowledge</title> <link>https://www.mdpi.com/2078-2489/15/11/720</link> <description>This paper considers the question of what kind of knowledge is produced by deep learning. Ryle&amp;rsquo;s concept of knowledge how is examined and is contrasted with knowledge with a rationale. It is then argued that deep neural networks do produce knowledge how, but, because of their opacity, they do not in general, though there may be some special cases to the contrary, produce knowledge with a rationale. It is concluded that the distinction between knowledge how and knowledge with a rationale is a useful one for judging whether a particular application of deep learning AI is appropriate.</description> <pubDate>2024-11-11</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 720: Deep Learning and Knowledge</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/720">doi: 10.3390/info15110720</a></p> <p>Authors: Donald Gillies </p> <p>This paper considers the question of what kind of knowledge is produced by deep learning. Ryle&amp;rsquo;s concept of knowledge how is examined and is contrasted with knowledge with a rationale. It is then argued that deep neural networks do produce knowledge how, but, because of their opacity, they do not in general, though there may be some special cases to the contrary, produce knowledge with a rationale. It is concluded that the distinction between knowledge how and knowledge with a rationale is a useful one for judging whether a particular application of deep learning AI is appropriate.</p> ]]></content:encoded> <dc:title>Deep Learning and Knowledge</dc:title> <dc:creator>Donald Gillies</dc:creator> <dc:identifier>doi: 10.3390/info15110720</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-11</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-11</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>720</prism:startingPage> <prism:doi>10.3390/info15110720</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/720</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/719"> <title>Information, Vol. 15, Pages 719: From Data to Diagnosis: Machine Learning Revolutionizes Epidemiological Predictions</title> <link>https://www.mdpi.com/2078-2489/15/11/719</link> <description>The outbreak of epidemiological diseases creates a major impact on humanity as well as on the world&amp;rsquo;s economy. The consequence of such infectious diseases affects the survival of mankind. 
The government has to stand up to the negative influence of these epidemiological diseases and facilitate society with medical resources and economic support. In recent times, COVID-19 has been one of the epidemiological diseases that created lethal effects and a greater slump in the economy. Therefore, the prediction of outbreaks is essential for epidemiological diseases. It may be either frequent or sudden infections in society. The unexpected rise in the application of prediction models in recent years is outstanding.</description> <pubDate>2024-11-08</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 719: From Data to Diagnosis: Machine Learning Revolutionizes Epidemiological Predictions</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/719">doi: 10.3390/info15110719</a></p> <p>Authors: Abdul Aziz Abdul Rahman Gowri Rajasekaran Rathipriya Ramalingam Abdelrhman Meero Dhamodharavadhani Seetharaman </p> <p>The outbreak of epidemiological diseases creates a major impact on humanity as well as on the world&amp;rsquo;s economy. The consequence of such infectious diseases affects the survival of mankind. The government has to stand up to the negative influence of these epidemiological diseases and facilitate society with medical resources and economic support. In recent times, COVID-19 has been one of the epidemiological diseases that created lethal effects and a greater slump in the economy. Therefore, the prediction of outbreaks is essential for epidemiological diseases. It may be either frequent or sudden infections in society. The unexpected rise in the application of prediction models in recent years is outstanding. 
A study on these epidemiological prediction models and their usage from the year 2018 onwards is highlighted in this article. The popularity of various prediction approaches is emphasized and summarized in this article.</p> ]]></content:encoded> <dc:title>From Data to Diagnosis: Machine Learning Revolutionizes Epidemiological Predictions</dc:title> <dc:creator>Abdul Aziz Abdul Rahman</dc:creator> <dc:creator>Gowri Rajasekaran</dc:creator> <dc:creator>Rathipriya Ramalingam</dc:creator> <dc:creator>Abdelrhman Meero</dc:creator> <dc:creator>Dhamodharavadhani Seetharaman</dc:creator> <dc:identifier>doi: 10.3390/info15110719</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-08</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-08</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>719</prism:startingPage> <prism:doi>10.3390/info15110719</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/719</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/718"> <title>Information, Vol. 15, Pages 718: Lightweight Reference-Based Video Super-Resolution Using Deformable Convolution</title> <link>https://www.mdpi.com/2078-2489/15/11/718</link> <description>Super-resolution is a technique for generating a high-resolution image or video from a low-resolution counterpart by predicting natural and realistic texture information. It has various applications such as medical image analysis, surveillance, remote sensing, etc. However, traditional single-image super-resolution methods can lead to a blurry visual effect. Reference-based super-resolution methods have been proposed to recover detailed information accurately. In reference-based methods, a high-resolution image is also used as a reference in addition to the low-resolution input image. 
Reference-based methods aim at transferring high-resolution textures from the reference image to produce visually pleasing results. However, this requires texture alignment between low-resolution and reference images, which generally demands a lot of time and memory. This paper proposes a lightweight reference-based video super-resolution method using deformable convolution. The proposed method makes reference-based super-resolution a technology that can be easily used even in environments with limited computational resources. To verify the effectiveness of the proposed method, we conducted experiments to compare the proposed method with baseline methods in two aspects: runtime and memory usage, in addition to accuracy. The experimental results showed that the proposed method restored a high-quality super-resolved image from a very low-resolution level in 0.0138 s using two NVIDIA RTX 2080 GPUs, much faster than the representative method.</description> <pubDate>2024-11-08</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 718: Lightweight Reference-Based Video Super-Resolution Using Deformable Convolution</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/718">doi: 10.3390/info15110718</a></p> <p>Authors: Tomo Miyazaki Zirui Guo Shinichiro Omachi </p> <p>Super-resolution is a technique for generating a high-resolution image or video from a low-resolution counterpart by predicting natural and realistic texture information. It has various applications such as medical image analysis, surveillance, remote sensing, etc. However, traditional single-image super-resolution methods can lead to a blurry visual effect. Reference-based super-resolution methods have been proposed to recover detailed information accurately. In reference-based methods, a high-resolution image is also used as a reference in addition to the low-resolution input image. 
Reference-based methods aim at transferring high-resolution textures from the reference image to produce visually pleasing results. However, this requires texture alignment between low-resolution and reference images, which generally demands a lot of time and memory. This paper proposes a lightweight reference-based video super-resolution method using deformable convolution. The proposed method makes reference-based super-resolution a technology that can be easily used even in environments with limited computational resources. To verify the effectiveness of the proposed method, we conducted experiments to compare the proposed method with baseline methods in two aspects: runtime and memory usage, in addition to accuracy. The experimental results showed that the proposed method restored a high-quality super-resolved image from a very low-resolution level in 0.0138 s using two NVIDIA RTX 2080 GPUs, much faster than the representative method.</p> ]]></content:encoded> <dc:title>Lightweight Reference-Based Video Super-Resolution Using Deformable Convolution</dc:title> <dc:creator>Tomo Miyazaki</dc:creator> <dc:creator>Zirui Guo</dc:creator> <dc:creator>Shinichiro Omachi</dc:creator> <dc:identifier>doi: 10.3390/info15110718</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-08</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-08</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>718</prism:startingPage> <prism:doi>10.3390/info15110718</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/718</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/717"> <title>Information, Vol. 
15, Pages 717: Variational Color Shift and Auto-Encoder Based on Large Separable Kernel Attention for Enhanced Text CAPTCHA Vulnerability Assessment</title> <link>https://www.mdpi.com/2078-2489/15/11/717</link> <description>Text CAPTCHAs are crucial security measures deployed on global websites to deter unauthorized intrusions. The presence of anti-attack features incorporated into text CAPTCHAs limits the effectiveness of evaluating them, despite CAPTCHA recognition being an effective method for assessing their security. This study introduces a novel color augmentation technique called Variational Color Shift (VCS) to boost the recognition accuracy of different networks. VCS generates a color shift range for every input image and then resamples the image within that range to generate a new image, thus expanding the number of samples of the original dataset to improve training effectiveness. In contrast to Random Color Shift (RCS), which treats the color offsets as hyperparameters, VCS estimates color shifts by reparametrizing the points sampled from the uniform distribution using predicted offsets according to every image, which makes the color shifts learnable. To better balance computation and performance, we also propose two variants of VCS: Sim-VCS and Dilated-VCS. In addition, to solve the overfitting problem caused by disturbances in text CAPTCHAs, we propose an Auto-Encoder (AE) based on Large Separable Kernel Attention (AE-LSKA) to replace the convolutional module with large kernels in the text CAPTCHA recognizer. This new module employs an AE to compress the interference while expanding the receptive field using Large Separable Kernel Attention (LSKA), reducing the impact of local interference on the model training and improving the overall perception of characters. 
The experimental results show that the recognition accuracy of the model after integrating the AE-LSKA module is improved by at least 15 percentage points on both the M-CAPTCHA and P-CAPTCHA datasets. In addition, experimental results demonstrate that color augmentation using VCS is more effective in enhancing recognition, achieving higher accuracy than RCS and PCA Color Shift (PCA-CS).</description> <pubDate>2024-11-07</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 717: Variational Color Shift and Auto-Encoder Based on Large Separable Kernel Attention for Enhanced Text CAPTCHA Vulnerability Assessment</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/717">doi: 10.3390/info15110717</a></p> <p>Authors: Xing Wan Juliana Johari Fazlina Ahmat Ruslan </p> <p>Text CAPTCHAs are crucial security measures deployed on global websites to deter unauthorized intrusions. The presence of anti-attack features incorporated into text CAPTCHAs limits the effectiveness of evaluating them, despite CAPTCHA recognition being an effective method for assessing their security. This study introduces a novel color augmentation technique called Variational Color Shift (VCS) to boost the recognition accuracy of different networks. VCS generates a color shift range for every input image and then resamples the image within that range to generate a new image, thus expanding the number of samples of the original dataset to improve training effectiveness. In contrast to Random Color Shift (RCS), which treats the color offsets as hyperparameters, VCS estimates color shifts by reparametrizing the points sampled from the uniform distribution using predicted offsets according to every image, which makes the color shifts learnable. To better balance computation and performance, we also propose two variants of VCS: Sim-VCS and Dilated-VCS. 
In addition, to solve the overfitting problem caused by disturbances in text CAPTCHAs, we propose an Auto-Encoder (AE) based on Large Separable Kernel Attention (AE-LSKA) to replace the convolutional module with large kernels in the text CAPTCHA recognizer. This new module employs an AE to compress the interference while expanding the receptive field using Large Separable Kernel Attention (LSKA), reducing the impact of local interference on the model training and improving the overall perception of characters. The experimental results show that the recognition accuracy of the model after integrating the AE-LSKA module is improved by at least 15 percentage points on both the M-CAPTCHA and P-CAPTCHA datasets. In addition, experimental results demonstrate that color augmentation using VCS is more effective in enhancing recognition, achieving higher accuracy than RCS and PCA Color Shift (PCA-CS).</p> ]]></content:encoded> <dc:title>Variational Color Shift and Auto-Encoder Based on Large Separable Kernel Attention for Enhanced Text CAPTCHA Vulnerability Assessment</dc:title> <dc:creator>Xing Wan</dc:creator> <dc:creator>Juliana Johari</dc:creator> <dc:creator>Fazlina Ahmat Ruslan</dc:creator> <dc:identifier>doi: 10.3390/info15110717</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-07</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-07</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>717</prism:startingPage> <prism:doi>10.3390/info15110717</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/717</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/716"> <title>Information, Vol. 
15, Pages 716: Enabling Parallel Performance and Portability of Solid Mechanics Simulations Across CPU and GPU Architectures</title> <link>https://www.mdpi.com/2078-2489/15/11/716</link> <description>Efficiently simulating solid mechanics is vital across various engineering applications. As constitutive models grow more complex and simulations scale up in size, harnessing the capabilities of modern computer architectures has become essential for achieving timely results. This paper presents advancements in running parallel simulations of solid mechanics on multi-core CPUs and GPUs using a single-code implementation. This portability is made possible by the C++ matrix and array (MATAR) library, which interfaces with the C++ Kokkos library, enabling the selection of fine-grained parallelism backends (e.g., CUDA, HIP, OpenMP, pthreads, etc.) at compile time. MATAR simplifies the transition from Fortran to C++ and Kokkos, making it easier to modernize legacy solid mechanics codes. We applied this approach to modernize a suite of constitutive models and to demonstrate substantial performance improvements across different computer architectures. This paper includes comparative performance studies using multi-core CPUs along with AMD and NVIDIA GPUs. Results are presented using a hypoelastic&amp;ndash;plastic model, a crystal plasticity model, and the viscoplastic self-consistent generalized material model (VPSC-GMM). The results underscore the potential of using the MATAR library and modern computer architectures to accelerate solid mechanics simulations.</description> <pubDate>2024-11-07</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 716: Enabling Parallel Performance and Portability of Solid Mechanics Simulations Across CPU and GPU Architectures</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/716">doi: 10.3390/info15110716</a></p> <p>Authors: Nathaniel Morgan Caleb Yenusah Adrian Diaz Daniel Dunning Jacob Moore Erin Heilman Evan Lieberman Steven Walton Sarah Brown Daniel Holladay Russell Marki Robert Robey Marko Knezevic </p> <p>Efficiently simulating solid mechanics is vital across various engineering applications. As constitutive models grow more complex and simulations scale up in size, harnessing the capabilities of modern computer architectures has become essential for achieving timely results. This paper presents advancements in running parallel simulations of solid mechanics on multi-core CPUs and GPUs using a single-code implementation. This portability is made possible by the C++ matrix and array (MATAR) library, which interfaces with the C++ Kokkos library, enabling the selection of fine-grained parallelism backends (e.g., CUDA, HIP, OpenMP, pthreads, etc.) at compile time. MATAR simplifies the transition from Fortran to C++ and Kokkos, making it easier to modernize legacy solid mechanics codes. We applied this approach to modernize a suite of constitutive models and to demonstrate substantial performance improvements across different computer architectures. This paper includes comparative performance studies using multi-core CPUs along with AMD and NVIDIA GPUs. Results are presented using a hypoelastic&amp;ndash;plastic model, a crystal plasticity model, and the viscoplastic self-consistent generalized material model (VPSC-GMM). 
The results underscore the potential of using the MATAR library and modern computer architectures to accelerate solid mechanics simulations.</p> ]]></content:encoded> <dc:title>Enabling Parallel Performance and Portability of Solid Mechanics Simulations Across CPU and GPU Architectures</dc:title> <dc:creator>Nathaniel Morgan</dc:creator> <dc:creator>Caleb Yenusah</dc:creator> <dc:creator>Adrian Diaz</dc:creator> <dc:creator>Daniel Dunning</dc:creator> <dc:creator>Jacob Moore</dc:creator> <dc:creator>Erin Heilman</dc:creator> <dc:creator>Evan Lieberman</dc:creator> <dc:creator>Steven Walton</dc:creator> <dc:creator>Sarah Brown</dc:creator> <dc:creator>Daniel Holladay</dc:creator> <dc:creator>Russell Marki</dc:creator> <dc:creator>Robert Robey</dc:creator> <dc:creator>Marko Knezevic</dc:creator> <dc:identifier>doi: 10.3390/info15110716</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-07</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-07</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>716</prism:startingPage> <prism:doi>10.3390/info15110716</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/716</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/715"> <title>Information, Vol. 15, Pages 715: Two-Stage Combined Model for Short-Term Electricity Forecasting in Ports</title> <link>https://www.mdpi.com/2078-2489/15/11/715</link> <description>With an increasing emphasis on energy conservation, emission reduction, and power consumption management, port enterprises are focusing on enhancing their electricity load forecasting capabilities. Accurate electricity load forecasting is crucial for understanding power usage and optimizing energy allocation. 
This study introduces a novel approach that transcends the limitations of single prediction models by employing a Binary Fusion Weight Determination Method (BFWDM) to optimize and integrate three distinct prediction models: Temporal Pattern Attention Long Short-Term Memory (TPA-LSTM), Multi-Quantile Recurrent Neural Network (MQ-RNN), and Deep Factors. We propose a two-phase process for constructing an optimal combined forecasting model for port power load prediction. In the initial phase, individual prediction models generate preliminary outcomes. In the subsequent phase, these preliminary predictions are used to construct a combination forecasting model based on the BFWDM. The efficacy of the proposed model is validated using data from two actual ports, demonstrating high prediction accuracy with Mean Absolute Percentage Errors (MAPE) of only 6.23% and 7.94%. This approach not only enhances the prediction accuracy but also improves the adaptability and stability of the model compared to other existing models.</description> <pubDate>2024-11-07</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 715: Two-Stage Combined Model for Short-Term Electricity Forecasting in Ports</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/715">doi: 10.3390/info15110715</a></p> <p>Authors: Wentao Song Xiaohua Cao Hanrui Jiang Zejun Li Ruobin Gao </p> <p>With an increasing emphasis on energy conservation, emission reduction, and power consumption management, port enterprises are focusing on enhancing their electricity load forecasting capabilities. Accurate electricity load forecasting is crucial for understanding power usage and optimizing energy allocation. 
This study introduces a novel approach that transcends the limitations of single prediction models by employing a Binary Fusion Weight Determination Method (BFWDM) to optimize and integrate three distinct prediction models: Temporal Pattern Attention Long Short-Term Memory (TPA-LSTM), Multi-Quantile Recurrent Neural Network (MQ-RNN), and Deep Factors. We propose a two-phase process for constructing an optimal combined forecasting model for port power load prediction. In the initial phase, individual prediction models generate preliminary outcomes. In the subsequent phase, these preliminary predictions are used to construct a combination forecasting model based on the BFWDM. The efficacy of the proposed model is validated using data from two actual ports, demonstrating high prediction accuracy with Mean Absolute Percentage Errors (MAPE) of only 6.23% and 7.94%. This approach not only enhances the prediction accuracy but also improves the adaptability and stability of the model compared to other existing models.</p> ]]></content:encoded> <dc:title>Two-Stage Combined Model for Short-Term Electricity Forecasting in Ports</dc:title> <dc:creator>Wentao Song</dc:creator> <dc:creator>Xiaohua Cao</dc:creator> <dc:creator>Hanrui Jiang</dc:creator> <dc:creator>Zejun Li</dc:creator> <dc:creator>Ruobin Gao</dc:creator> <dc:identifier>doi: 10.3390/info15110715</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-07</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-07</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>715</prism:startingPage> <prism:doi>10.3390/info15110715</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/715</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/714"> <title>Information, Vol. 
15, Pages 714: Maternal Nutritional Factors Enhance Birthweight Prediction: A Super Learner Ensemble Approach</title> <link>https://www.mdpi.com/2078-2489/15/11/714</link> <description>Birthweight (BW) is a widely used indicator of neonatal health, with low birthweight (LBW) being linked to higher risks of morbidity and mortality. Timely and precise prediction of LBW is crucial for ensuring newborn health and well-being. Despite recent machine learning advancements in BW classification based on physiological traits in the mother and ultrasound outcomes, maternal status in essential micronutrients for fetal development is yet to be fully exploited for BW prediction. This study aims to evaluate the impact of maternal nutritional factors, specifically mid-pregnancy plasma concentrations of vitamin B12, folate, and anemia on BW prediction. This study analyzed data from 729 pregnant women in Tarragona, Spain, for early BW prediction and analyzed each factor&amp;rsquo;s impact and contribution using a partial dependency plot and feature importance. Using a super learner ensemble method with tenfold cross-validation, the model achieved a prediction accuracy of 96.19% and an AUC-ROC of 0.96, outperforming single-model approaches. Vitamin B12 and folate status were identified as significant predictors, underscoring their importance in reducing LBW risk. The findings highlight the critical role of maternal nutritional factors in BW prediction and suggest that monitoring vitamin B12 and folate levels during pregnancy could enhance prenatal care and mitigate neonatal complications associated with LBW.</description> <pubDate>2024-11-06</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 714: Maternal Nutritional Factors Enhance Birthweight Prediction: A Super Learner Ensemble Approach</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/714">doi: 10.3390/info15110714</a></p> <p>Authors: Muhammad Mursil Hatem A. 
Rashwan Pere Cavallé-Busquets Luis A. Santos-Calderón Michelle M. Murphy Domenec Puig </p> <p>Birthweight (BW) is a widely used indicator of neonatal health, with low birthweight (LBW) being linked to higher risks of morbidity and mortality. Timely and precise prediction of LBW is crucial for ensuring newborn health and well-being. Despite recent machine learning advancements in BW classification based on physiological traits in the mother and ultrasound outcomes, maternal status in essential micronutrients for fetal development is yet to be fully exploited for BW prediction. This study aims to evaluate the impact of maternal nutritional factors, specifically mid-pregnancy plasma concentrations of vitamin B12, folate, and anemia on BW prediction. This study analyzed data from 729 pregnant women in Tarragona, Spain, for early BW prediction and analyzed each factor&amp;rsquo;s impact and contribution using a partial dependency plot and feature importance. Using a super learner ensemble method with tenfold cross-validation, the model achieved a prediction accuracy of 96.19% and an AUC-ROC of 0.96, outperforming single-model approaches. Vitamin B12 and folate status were identified as significant predictors, underscoring their importance in reducing LBW risk. The findings highlight the critical role of maternal nutritional factors in BW prediction and suggest that monitoring vitamin B12 and folate levels during pregnancy could enhance prenatal care and mitigate neonatal complications associated with LBW.</p> ]]></content:encoded> <dc:title>Maternal Nutritional Factors Enhance Birthweight Prediction: A Super Learner Ensemble Approach</dc:title> <dc:creator>Muhammad Mursil</dc:creator> <dc:creator>Hatem A. Rashwan</dc:creator> <dc:creator>Pere Cavallé-Busquets</dc:creator> <dc:creator>Luis A. Santos-Calderón</dc:creator> <dc:creator>Michelle M. 
Murphy</dc:creator> <dc:creator>Domenec Puig</dc:creator> <dc:identifier>doi: 10.3390/info15110714</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-06</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-06</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>714</prism:startingPage> <prism:doi>10.3390/info15110714</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/714</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/713"> <title>Information, Vol. 15, Pages 713: Best IDEAS: Special Issue of the International Database Engineered Applications Symposium</title> <link>https://www.mdpi.com/2078-2489/15/11/713</link> <description>Database engineered applications cover a broad range of topics including various design and maintenance methods, as well as data analytics and data mining algorithms and learning strategies for enterprise, distributed, or federated data stores [...]</description> <pubDate>2024-11-06</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 713: Best IDEAS: Special Issue of the International Database Engineered Applications Symposium</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/713">doi: 10.3390/info15110713</a></p> <p>Authors: Peter Z. Revesz </p> <p>Database engineered applications cover a broad range of topics including various design and maintenance methods, as well as data analytics and data mining algorithms and learning strategies for enterprise, distributed, or federated data stores [...]</p> ]]></content:encoded> <dc:title>Best IDEAS: Special Issue of the International Database Engineered Applications Symposium</dc:title> <dc:creator>Peter Z. 
Revesz</dc:creator> <dc:identifier>doi: 10.3390/info15110713</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-06</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-06</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Editorial</prism:section> <prism:startingPage>713</prism:startingPage> <prism:doi>10.3390/info15110713</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/713</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/712"> <title>Information, Vol. 15, Pages 712: Exploring the Features and Trends of Industrial Product E-Commerce in China Using Text-Mining Approaches</title> <link>https://www.mdpi.com/2078-2489/15/11/712</link> <description>Industrial product e-commerce refers to the specific application of the e-commerce concept in industrial product transactions. It enables industrial enterprises to conduct transactions via Internet platforms and reduce circulation and operating costs. Industrial literature, such as policies, reports, and standards related to industrial product e-commerce, contains much crucial information. Through a systematic analysis of this information, we can explore and comprehend the development characteristics and trends of industrial product e-commerce. To this end, 18 policy documents, 10 industrial reports, and five standards are analyzed by employing text-mining methods. Firstly, natural language processing (NLP) technology is utilized to pre-process the text data related to industrial product e-commerce. Then, word frequency statistics and TF-IDF keyword extraction are performed, and the word frequency statistics are visually represented. Subsequently, the feature set is obtained by combining these processes with the manual screening method. 
The original text corpus is used as the training set by employing the skip-gram model in Word2Vec, and the feature words are transformed into word vectors in the multi-dimensional space. The K-means algorithm is used to cluster the feature words into groups. The latent Dirichlet allocation (LDA) method is then utilized to further group and discover the features. The text-mining results provide evidence for the development characteristics and trends of industrial product e-commerce in China.</description> <pubDate>2024-11-06</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 712: Exploring the Features and Trends of Industrial Product E-Commerce in China Using Text-Mining Approaches</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/712">doi: 10.3390/info15110712</a></p> <p>Authors: Zhaoyang Sun Qi Zong Yuxin Mao Gongxing Wu </p> <p>Industrial product e-commerce refers to the specific application of the e-commerce concept in industrial product transactions. It enables industrial enterprises to conduct transactions via Internet platforms and reduce circulation and operating costs. Industrial literature, such as policies, reports, and standards related to industrial product e-commerce, contains much crucial information. Through a systematic analysis of this information, we can explore and comprehend the development characteristics and trends of industrial product e-commerce. To this end, 18 policy documents, 10 industrial reports, and five standards are analyzed by employing text-mining methods. Firstly, natural language processing (NLP) technology is utilized to pre-process the text data related to industrial product e-commerce. Then, word frequency statistics and TF-IDF keyword extraction are performed, and the word frequency statistics are visually represented. Subsequently, the feature set is obtained by combining these processes with the manual screening method. 
The original text corpus is used as the training set by employing the skip-gram model in Word2Vec, and the feature words are transformed into word vectors in the multi-dimensional space. The K-means algorithm is used to cluster the feature words into groups. The latent Dirichlet allocation (LDA) method is then utilized to further group and discover the features. The text-mining results provide evidence for the development characteristics and trends of industrial product e-commerce in China.</p> ]]></content:encoded> <dc:title>Exploring the Features and Trends of Industrial Product E-Commerce in China Using Text-Mining Approaches</dc:title> <dc:creator>Zhaoyang Sun</dc:creator> <dc:creator>Qi Zong</dc:creator> <dc:creator>Yuxin Mao</dc:creator> <dc:creator>Gongxing Wu</dc:creator> <dc:identifier>doi: 10.3390/info15110712</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-06</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-06</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>712</prism:startingPage> <prism:doi>10.3390/info15110712</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/712</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/711"> <title>Information, Vol. 15, Pages 711: Discrete Fourier Transform in Unmasking Deepfake Images: A Comparative Study of StyleGAN Creations</title> <link>https://www.mdpi.com/2078-2489/15/11/711</link> <description>This study proposes a novel forgery detection method based on the analysis of frequency components of images using the Discrete Fourier Transform (DFT). 
In recent years, face manipulation technologies, particularly Generative Adversarial Networks (GANs), have advanced to such an extent that their misuse, such as creating deepfakes indistinguishable to human observers, has become a significant societal concern. We reviewed two GAN architectures, StyleGAN and StyleGAN2, generating synthetic faces that were compared with real faces from the FFHQ and CelebA-HQ datasets. The key results demonstrate classification accuracies above 99%, with F1 scores of 99.94% for Support Vector Machines and 97.21% for Random Forest classifiers. These findings underline that frequency analysis presents a superior approach to deepfake detection compared to traditional spatial detection methods. It provides insight into subtle manipulation cues in digital images and offers a scalable way to enhance security protocols amid rising digital impersonation threats.</description> <pubDate>2024-11-06</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 711: Discrete Fourier Transform in Unmasking Deepfake Images: A Comparative Study of StyleGAN Creations</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/711">doi: 10.3390/info15110711</a></p> <p>Authors: Vito Nicola Convertini Donato Impedovo Ugo Lopez Giuseppe Pirlo Gioacchino Sterlicchio </p> <p>This study proposes a novel forgery detection method based on the analysis of frequency components of images using the Discrete Fourier Transform (DFT). In recent years, face manipulation technologies, particularly Generative Adversarial Networks (GANs), have advanced to such an extent that their misuse, such as creating deepfakes indistinguishable to human observers, has become a significant societal concern. We reviewed two GAN architectures, StyleGAN and StyleGAN2, generating synthetic faces that were compared with real faces from the FFHQ and CelebA-HQ datasets. 
The key results demonstrate classification accuracies above 99%, with F1 scores of 99.94% for Support Vector Machines and 97.21% for Random Forest classifiers. These findings underline that frequency analysis presents a superior approach to deepfake detection compared to traditional spatial detection methods. It provides insight into subtle manipulation cues in digital images and offers a scalable way to enhance security protocols amid rising digital impersonation threats.</p> ]]></content:encoded> <dc:title>Discrete Fourier Transform in Unmasking Deepfake Images: A Comparative Study of StyleGAN Creations</dc:title> <dc:creator>Vito Nicola Convertini</dc:creator> <dc:creator>Donato Impedovo</dc:creator> <dc:creator>Ugo Lopez</dc:creator> <dc:creator>Giuseppe Pirlo</dc:creator> <dc:creator>Gioacchino Sterlicchio</dc:creator> <dc:identifier>doi: 10.3390/info15110711</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-06</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-06</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>711</prism:startingPage> <prism:doi>10.3390/info15110711</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/711</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/710"> <title>Information, Vol. 15, Pages 710: Cybersecurity at Sea: A Literature Review of Cyber-Attack Impacts and Defenses in Maritime Supply Chains</title> <link>https://www.mdpi.com/2078-2489/15/11/710</link> <description>The maritime industry is constantly evolving and posing new challenges, especially with increasing digitalization, which has raised concerns about cyber-attacks on maritime supply chain agents. 
Although scholars have proposed various methods and classification models to counter these cyber threats, a comprehensive cyber-attack taxonomy for maritime supply chain actors based on a systematic literature review is still lacking. This review aims to provide a clear picture of common cyber-attacks and develop a taxonomy for their categorization. In addition, it outlines best practices derived from academic research in maritime cybersecurity using PRISMA principles for a systematic literature review, which identified 110 relevant journal papers. This study highlights that distributed denial of service (DDoS) attacks and malware are top concerns for all maritime supply chain stakeholders. In particular, shipping companies are urged to prioritize defenses against hijacking, spoofing, and jamming. The report identifies 18 practices to combat cyber-attacks, categorized into information security management solutions, information security policies, and cybersecurity awareness and training. Finally, this paper explores how emerging technologies can address cyber-attacks in the maritime supply chain network (MSCN). While Industry 4.0 technologies are highlighted as significant trends in the literature, this study aims to equip MSCN stakeholders with the knowledge to effectively leverage a broader range of emerging technologies. In doing so, it provides forward-looking solutions to prevent and mitigate cyber-attacks, emphasizing that Industry 4.0 is part of a larger landscape of technological innovation.</description> <pubDate>2024-11-06</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 710: Cybersecurity at Sea: A Literature Review of Cyber-Attack Impacts and Defenses in Maritime Supply Chains</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/710">doi: 10.3390/info15110710</a></p> <p>Authors: Maria Valentina Clavijo Mesa Carmen Elena Patino-Rodriguez Fernando Jesus Guevara Carazas </p> <p>The maritime industry is constantly evolving and posing new challenges, especially with increasing digitalization, which has raised concerns about cyber-attacks on maritime supply chain agents. Although scholars have proposed various methods and classification models to counter these cyber threats, a comprehensive cyber-attack taxonomy for maritime supply chain actors based on a systematic literature review is still lacking. This review aims to provide a clear picture of common cyber-attacks and develop a taxonomy for their categorization. In addition, it outlines best practices derived from academic research in maritime cybersecurity using PRISMA principles for a systematic literature review, which identified 110 relevant journal papers. This study highlights that distributed denial of service (DDoS) attacks and malware are top concerns for all maritime supply chain stakeholders. In particular, shipping companies are urged to prioritize defenses against hijacking, spoofing, and jamming. The report identifies 18 practices to combat cyber-attacks, categorized into information security management solutions, information security policies, and cybersecurity awareness and training. Finally, this paper explores how emerging technologies can address cyber-attacks in the maritime supply chain network (MSCN). While Industry 4.0 technologies are highlighted as significant trends in the literature, this study aims to equip MSCN stakeholders with the knowledge to effectively leverage a broader range of emerging technologies. 
In doing so, it provides forward-looking solutions to prevent and mitigate cyber-attacks, emphasizing that Industry 4.0 is part of a larger landscape of technological innovation.</p> ]]></content:encoded> <dc:title>Cybersecurity at Sea: A Literature Review of Cyber-Attack Impacts and Defenses in Maritime Supply Chains</dc:title> <dc:creator>Maria Valentina Clavijo Mesa</dc:creator> <dc:creator>Carmen Elena Patino-Rodriguez</dc:creator> <dc:creator>Fernando Jesus Guevara Carazas</dc:creator> <dc:identifier>doi: 10.3390/info15110710</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-06</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-06</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>710</prism:startingPage> <prism:doi>10.3390/info15110710</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/710</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/709"> <title>Information, Vol. 15, Pages 709: Pressure and Temperature Prediction of Oil Pipeline Networks Based on a Mechanism-Data Hybrid Driven Method</title> <link>https://www.mdpi.com/2078-2489/15/11/709</link> <description>To ensure the operational safety of oil transportation stations, it is crucial to predict the impact of pressure and temperature before crude oil enters the pipeline network. Accurate predictions enable the assessment of the pipeline&amp;rsquo;s load-bearing capacity and the prevention of potential safety incidents. Most existing studies primarily focus on describing and modeling the mechanisms of the oil flow process. However, monitoring data can be skewed by factors such as instrument aging and pipeline friction, leading to inaccurate predictions when relying solely on mechanistic or data-driven approaches. 
To address these limitations, this paper proposes a Temporal-Spatial Three-stream Temporal Convolutional Network (TS-TTCN) model that integrates mechanistic knowledge with data-driven methods. Building upon Temporal Convolutional Networks (TCN), the TS-TTCN model synthesizes mechanistic insights into the oil transport process to establish a hybrid driving mechanism. In the temporal dimension, it incorporates real-time operating parameters and applies temporal convolution techniques to capture the time-series characteristics of the oil transportation pipeline network. In the spatial dimension, it constructs a directed topological map based on the pipeline network&amp;rsquo;s node structure to characterize spatial features. Data analysis and experimental results show that the Three-stream Temporal Convolutional Network (TTCN) model, which uses a Tanh activation function, achieves an error rate below 5%. By analyzing and validating real-time data from the Dongying oil transportation station, the proposed hybrid model proves to be more stable, reliable, and accurate under varying operating conditions.</description> <pubDate>2024-11-05</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 709: Pressure and Temperature Prediction of Oil Pipeline Networks Based on a Mechanism-Data Hybrid Driven Method</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/709">doi: 10.3390/info15110709</a></p> <p>Authors: Faming Gong Xingfang Zhao Chengze Du Kaiwen Zheng Zhuang Shi Hao Wang </p> <p>To ensure the operational safety of oil transportation stations, it is crucial to predict the impact of pressure and temperature before crude oil enters the pipeline network. Accurate predictions enable the assessment of the pipeline&amp;rsquo;s load-bearing capacity and the prevention of potential safety incidents. Most existing studies primarily focus on describing and modeling the mechanisms of the oil flow process. 
However, monitoring data can be skewed by factors such as instrument aging and pipeline friction, leading to inaccurate predictions when relying solely on mechanistic or data-driven approaches. To address these limitations, this paper proposes a Temporal-Spatial Three-stream Temporal Convolutional Network (TS-TTCN) model that integrates mechanistic knowledge with data-driven methods. Building upon Temporal Convolutional Networks (TCN), the TS-TTCN model synthesizes mechanistic insights into the oil transport process to establish a hybrid driving mechanism. In the temporal dimension, it incorporates real-time operating parameters and applies temporal convolution techniques to capture the time-series characteristics of the oil transportation pipeline network. In the spatial dimension, it constructs a directed topological map based on the pipeline network&amp;rsquo;s node structure to characterize spatial features. Data analysis and experimental results show that the Three-stream Temporal Convolutional Network (TTCN) model, which uses a Tanh activation function, achieves an error rate below 5%. 
By analyzing and validating real-time data from the Dongying oil transportation station, the proposed hybrid model proves to be more stable, reliable, and accurate under varying operating conditions.</p> ]]></content:encoded> <dc:title>Pressure and Temperature Prediction of Oil Pipeline Networks Based on a Mechanism-Data Hybrid Driven Method</dc:title> <dc:creator>Faming Gong</dc:creator> <dc:creator>Xingfang Zhao</dc:creator> <dc:creator>Chengze Du</dc:creator> <dc:creator>Kaiwen Zheng</dc:creator> <dc:creator>Zhuang Shi</dc:creator> <dc:creator>Hao Wang</dc:creator> <dc:identifier>doi: 10.3390/info15110709</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-05</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-05</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>709</prism:startingPage> <prism:doi>10.3390/info15110709</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/709</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/708"> <title>Information, Vol. 15, Pages 708: A Hybrid Semantic Representation Method Based on Fusion Conceptual Knowledge and Weighted Word Embeddings for English Texts</title> <link>https://www.mdpi.com/2078-2489/15/11/708</link> <description>The accuracy of traditional topic models may be compromised due to the sparsity of co-occurring vocabulary in the corpus, whereas conventional word embedding models tend to excessively prioritize contextual semantic information and inadequately capture domain-specific features in the text. This paper proposes a hybrid semantic representation method that combines a topic model that integrates conceptual knowledge with a weighted word embedding model. 
Specifically, we construct a topic model incorporating the Probase concept knowledge base to perform topic clustering and obtain topic semantic representation. Additionally, we design a weighted word embedding model to enhance the contextual semantic information representation of the text. The feature-based information fusion model is employed to integrate the two textual representations and generate a hybrid semantic representation. The hybrid semantic representation model proposed in this study was evaluated based on various English composition test sets. The findings demonstrate that the model presented in this paper exhibits superior accuracy and practical value compared to existing text representation methods.</description> <pubDate>2024-11-05</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 708: A Hybrid Semantic Representation Method Based on Fusion Conceptual Knowledge and Weighted Word Embeddings for English Texts</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/708">doi: 10.3390/info15110708</a></p> <p>Authors: Zan Qiu Guimin Huang Xingguo Qin Yabing Wang Jiahao Wang Ya Zhou </p> <p>The accuracy of traditional topic models may be compromised due to the sparsity of co-occurring vocabulary in the corpus, whereas conventional word embedding models tend to excessively prioritize contextual semantic information and inadequately capture domain-specific features in the text. This paper proposes a hybrid semantic representation method that combines a topic model that integrates conceptual knowledge with a weighted word embedding model. Specifically, we construct a topic model incorporating the Probase concept knowledge base to perform topic clustering and obtain topic semantic representation. Additionally, we design a weighted word embedding model to enhance the contextual semantic information representation of the text. 
The feature-based information fusion model is employed to integrate the two textual representations and generate a hybrid semantic representation. The hybrid semantic representation model proposed in this study was evaluated based on various English composition test sets. The findings demonstrate that the model presented in this paper exhibits superior accuracy and practical value compared to existing text representation methods.</p> ]]></content:encoded> <dc:title>A Hybrid Semantic Representation Method Based on Fusion Conceptual Knowledge and Weighted Word Embeddings for English Texts</dc:title> <dc:creator>Zan Qiu</dc:creator> <dc:creator>Guimin Huang</dc:creator> <dc:creator>Xingguo Qin</dc:creator> <dc:creator>Yabing Wang</dc:creator> <dc:creator>Jiahao Wang</dc:creator> <dc:creator>Ya Zhou</dc:creator> <dc:identifier>doi: 10.3390/info15110708</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-05</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-05</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>708</prism:startingPage> <prism:doi>10.3390/info15110708</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/708</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/706"> <title>Information, Vol. 15, Pages 706: Uncovering Key Factors That Drive the Impressions of Online Emerging Technology Narratives</title> <link>https://www.mdpi.com/2078-2489/15/11/706</link> <description>Social media platforms play a significant role in facilitating business decision making, especially in the context of emerging technologies. 
Such platforms offer a rich source of data from a global audience, which can provide organisations with insights into market trends, consumer behaviour, and attitudes towards specific technologies, as well as monitoring competitor activity. In the context of social media, such insights are conceptualised as immediate and real-time behavioural responses measured by likes, comments, and shares. To monitor such metrics, social media platforms have introduced tools that allow users to analyse and track the performance of their posts and understand their audience. However, the existing tools often overlook the impact of contextual features such as sentiment, URL inclusion, and specific word use. This paper presents a data-driven framework to identify and quantify the influence of such features on the visibility and impact of technology-related tweets. The quantitative analysis from statistical modelling reveals that certain content-based features, like the number of words and pronouns used, positively correlate with the impressions of tweets, with increases of up to 2.8%. Conversely, features such as the excessive use of hashtags, verbs, and complex sentences were found to decrease impressions significantly, with a notable reduction of 8.6% associated with tweets containing numerous trailing characters. Moreover, the study shows that tweets expressing negative sentiments tend to be more impressionable, likely due to a negativity bias that elicits stronger emotional responses and drives higher engagement and virality. Additionally, the sentiment associated with specific technologies also played a crucial role; positive sentiments linked to beneficial technologies like data science or machine learning significantly boosted impressions, while similar sentiments towards negatively viewed technologies like cyber threats reduced them. 
The inclusion of URLs in tweets also had a mixed impact on impressions&amp;mdash;enhancing engagement for general technology topics, but reducing it for sensitive subjects due to potential concerns over link safety. These findings underscore the importance of a strategic approach to social media content creation, emphasising the need for businesses to align their communication strategies, such as responding to shifts in user behaviours, new demands, and emerging uncertainties, with dynamic user engagement patterns.</description> <pubDate>2024-11-05</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 706: Uncovering Key Factors That Drive the Impressions of Online Emerging Technology Narratives</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/706">doi: 10.3390/info15110706</a></p> <p>Authors: Lowri Williams Eirini Anthi Pete Burnap </p> <p>Social media platforms play a significant role in facilitating business decision making, especially in the context of emerging technologies. Such platforms offer a rich source of data from a global audience, which can provide organisations with insights into market trends, consumer behaviour, and attitudes towards specific technologies, as well as monitoring competitor activity. In the context of social media, such insights are conceptualised as immediate and real-time behavioural responses measured by likes, comments, and shares. To monitor such metrics, social media platforms have introduced tools that allow users to analyse and track the performance of their posts and understand their audience. However, the existing tools often overlook the impact of contextual features such as sentiment, URL inclusion, and specific word use. This paper presents a data-driven framework to identify and quantify the influence of such features on the visibility and impact of technology-related tweets. 
The quantitative analysis from statistical modelling reveals that certain content-based features, like the number of words and pronouns used, positively correlate with the impressions of tweets, with increases of up to 2.8%. Conversely, features such as the excessive use of hashtags, verbs, and complex sentences were found to decrease impressions significantly, with a notable reduction of 8.6% associated with tweets containing numerous trailing characters. Moreover, the study shows that tweets expressing negative sentiments tend to be more impressionable, likely due to a negativity bias that elicits stronger emotional responses and drives higher engagement and virality. Additionally, the sentiment associated with specific technologies also played a crucial role; positive sentiments linked to beneficial technologies like data science or machine learning significantly boosted impressions, while similar sentiments towards negatively viewed technologies like cyber threats reduced them. The inclusion of URLs in tweets also had a mixed impact on impressions&amp;mdash;enhancing engagement for general technology topics, but reducing it for sensitive subjects due to potential concerns over link safety. 
These findings underscore the importance of a strategic approach to social media content creation, emphasising the need for businesses to align their communication strategies, such as responding to shifts in user behaviours, new demands, and emerging uncertainties, with dynamic user engagement patterns.</p> ]]></content:encoded> <dc:title>Uncovering Key Factors That Drive the Impressions of Online Emerging Technology Narratives</dc:title> <dc:creator>Lowri Williams</dc:creator> <dc:creator>Eirini Anthi</dc:creator> <dc:creator>Pete Burnap</dc:creator> <dc:identifier>doi: 10.3390/info15110706</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-05</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-05</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>706</prism:startingPage> <prism:doi>10.3390/info15110706</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/706</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/707"> <title>Information, Vol. 15, Pages 707: Is the Taiwan Stock Market (Swarm) Intelligent?</title> <link>https://www.mdpi.com/2078-2489/15/11/707</link> <description>It is well-believed that most trading activities tend to herd. Herding is an important topic in finance. It implies a violation of efficient markets and hence, suggests possibly predictable trading profits. However, it is hard to test such a hypothesis using aggregated data (as in the literature). In this paper, we obtain a proprietary data set that contains detailed trading information, and as a result, for the first time it allows us to validate this hypothesis. The data set contains all trades transacted in 2019 by all the brokers/dealers across all locations in Taiwan of all the equities (stocks, warrants, and ETFs). 
Given such data, in this paper, we use swarm intelligence to identify such herding behavior. In particular, we use two versions of swarm intelligence&amp;mdash;Boids and PSO (particle swarm optimization)&amp;mdash;to study the herding behavior. Our results indicate weak swarm among brokers/dealers.</description> <pubDate>2024-11-05</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 707: Is the Taiwan Stock Market (Swarm) Intelligent?</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/707">doi: 10.3390/info15110707</a></p> <p>Authors: Ren-Raw Chen </p> <p>It is well-believed that most trading activities tend to herd. Herding is an important topic in finance. It implies a violation of efficient markets and hence, suggests possibly predictable trading profits. However, it is hard to test such a hypothesis using aggregated data (as in the literature). In this paper, we obtain a proprietary data set that contains detailed trading information, and as a result, for the first time it allows us to validate this hypothesis. The data set contains all trades transacted in 2019 by all the brokers/dealers across all locations in Taiwan of all the equities (stocks, warrants, and ETFs). Given such data, in this paper, we use swarm intelligence to identify such herding behavior. In particular, we use two versions of swarm intelligence&amp;mdash;Boids and PSO (particle swarm optimization)&amp;mdash;to study the herding behavior. 
Our results indicate weak swarm among brokers/dealers.</p> ]]></content:encoded> <dc:title>Is the Taiwan Stock Market (Swarm) Intelligent?</dc:title> <dc:creator>Ren-Raw Chen</dc:creator> <dc:identifier>doi: 10.3390/info15110707</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-05</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-05</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>707</prism:startingPage> <prism:doi>10.3390/info15110707</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/707</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/705"> <title>Information, Vol. 15, Pages 705: Exploring Sentiment Analysis for the Indonesian Presidential Election Through Online Reviews Using Multi-Label Classification with a Deep Learning Algorithm</title> <link>https://www.mdpi.com/2078-2489/15/11/705</link> <description>Presidential elections are an important political event that often trigger intense debate. With more than 139 million users, YouTube serves as a significant platform for understanding public opinion through sentiment analysis. This study aimed to implement deep learning techniques for a multi-label sentiment analysis of comments on YouTube videos related to the 2024 Indonesian presidential election. Offering a fresh perspective compared to previous research that primarily employed traditional classification methods, this study classifies comments into eight emotional labels: anger, anticipation, disgust, joy, fear, sadness, surprise, and trust. By focusing on the emotional spectrum, this study provides a more nuanced understanding of public sentiment towards presidential candidates. 
The CRISP-DM method is applied, encompassing stages of business understanding, data understanding, data preparation, modeling, evaluation, and deployment, ensuring a systematic and comprehensive approach. This study employs a dataset comprising 32,000 comments, obtained via YouTube Data API, from the KPU and Najwa Shihab channels. The analysis is specifically centered on comments related to presidential candidate debates. Three deep learning models&amp;mdash;Convolutional Neural Network (CNN), Bidirectional Long Short-Term Memory (Bi-LSTM), and a hybrid model combining CNN and Bi-LSTM&amp;mdash;are assessed using confusion matrix, Area Under the Curve (AUC), and Hamming loss metrics. The evaluation results demonstrate that the Bi-LSTM model achieved the highest accuracy with an AUC value of 0.91 and a Hamming loss of 0.08, indicating an excellent ability to classify sentiment with high precision and a low error rate. This innovative approach to multi-label sentiment analysis in the context of the 2024 Indonesian presidential election expands the insights into public sentiment towards candidates, offering valuable implications for political campaign strategies. Additionally, this research contributes to the fields of natural language processing and data mining by addressing the challenges associated with multi-label sentiment analysis.</description> <pubDate>2024-11-05</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 705: Exploring Sentiment Analysis for the Indonesian Presidential Election Through Online Reviews Using Multi-Label Classification with a Deep Learning Algorithm</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/705">doi: 10.3390/info15110705</a></p> <p>Authors: Ahmad Nahid Ma&rsquo;aly Dita Pramesti Ariadani Dwi Fathurahman Hanif Fakhrurroja </p> <p>Presidential elections are an important political event that often trigger intense debate.
With more than 139 million users, YouTube serves as a significant platform for understanding public opinion through sentiment analysis. This study aimed to implement deep learning techniques for a multi-label sentiment analysis of comments on YouTube videos related to the 2024 Indonesian presidential election. Offering a fresh perspective compared to previous research that primarily employed traditional classification methods, this study classifies comments into eight emotional labels: anger, anticipation, disgust, joy, fear, sadness, surprise, and trust. By focusing on the emotional spectrum, this study provides a more nuanced understanding of public sentiment towards presidential candidates. The CRISP-DM method is applied, encompassing stages of business understanding, data understanding, data preparation, modeling, evaluation, and deployment, ensuring a systematic and comprehensive approach. This study employs a dataset comprising 32,000 comments, obtained via YouTube Data API, from the KPU and Najwa Shihab channels. The analysis is specifically centered on comments related to presidential candidate debates. Three deep learning models&amp;mdash;Convolutional Neural Network (CNN), Bidirectional Long Short-Term Memory (Bi-LSTM), and a hybrid model combining CNN and Bi-LSTM&amp;mdash;are assessed using confusion matrix, Area Under the Curve (AUC), and Hamming loss metrics. The evaluation results demonstrate that the Bi-LSTM model achieved the highest accuracy with an AUC value of 0.91 and a Hamming loss of 0.08, indicating an excellent ability to classify sentiment with high precision and a low error rate. This innovative approach to multi-label sentiment analysis in the context of the 2024 Indonesian presidential election expands the insights into public sentiment towards candidates, offering valuable implications for political campaign strategies. 
Additionally, this research contributes to the fields of natural language processing and data mining by addressing the challenges associated with multi-label sentiment analysis.</p> ]]></content:encoded> <dc:title>Exploring Sentiment Analysis for the Indonesian Presidential Election Through Online Reviews Using Multi-Label Classification with a Deep Learning Algorithm</dc:title> <dc:creator>Ahmad Nahid Ma&#8217;aly</dc:creator> <dc:creator>Dita Pramesti</dc:creator> <dc:creator>Ariadani Dwi Fathurahman</dc:creator> <dc:creator>Hanif Fakhrurroja</dc:creator> <dc:identifier>doi: 10.3390/info15110705</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-05</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-05</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>705</prism:startingPage> <prism:doi>10.3390/info15110705</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/705</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/704"> <title>Information, Vol. 15, Pages 704: Unsupervised Decision Trees for Axis Unimodal Clustering</title> <link>https://www.mdpi.com/2078-2489/15/11/704</link> <description>The use of decision trees for obtaining and representing clustering solutions is advantageous, due to their interpretability property. We propose a method called Decision Trees for Axis Unimodal Clustering (DTAUC), which constructs unsupervised binary decision trees for clustering by exploiting the concept of unimodality. Unimodality is a key property indicating the grouping behavior of data around a single density mode. Our approach is based on the notion of an axis unimodal cluster: a cluster where all features are unimodal, i.e., the set of values of each feature is unimodal as decided by a unimodality test.
The proposed method follows the typical top-down splitting paradigm for building axis-aligned decision trees and aims to partition the initial dataset into axis unimodal clusters by applying thresholding on multimodal features. To determine the decision rule at each node, we propose a criterion that combines unimodality and separation. The method automatically terminates when all clusters are axis unimodal. Unlike typical decision tree methods, DTAUC does not require user-defined hyperparameters, such as maximum tree depth or the minimum number of points per leaf, except for the significance level of the unimodality test. Comparative experimental results on various synthetic and real datasets indicate the effectiveness of our method.</description> <pubDate>2024-11-05</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 704: Unsupervised Decision Trees for Axis Unimodal Clustering</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/704">doi: 10.3390/info15110704</a></p> <p>Authors: Paraskevi Chasani Aristidis Likas </p> <p>The use of decision trees for obtaining and representing clustering solutions is advantageous, due to their interpretability property. We propose a method called Decision Trees for Axis Unimodal Clustering (DTAUC), which constructs unsupervised binary decision trees for clustering by exploiting the concept of unimodality. Unimodality is a key property indicating the grouping behavior of data around a single density mode. Our approach is based on the notion of an axis unimodal cluster: a cluster where all features are unimodal, i.e., the set of values of each feature is unimodal as decided by a unimodality test. The proposed method follows the typical top-down splitting paradigm for building axis-aligned decision trees and aims to partition the initial dataset into axis unimodal clusters by applying thresholding on multimodal features. 
To determine the decision rule at each node, we propose a criterion that combines unimodality and separation. The method automatically terminates when all clusters are axis unimodal. Unlike typical decision tree methods, DTAUC does not require user-defined hyperparameters, such as maximum tree depth or the minimum number of points per leaf, except for the significance level of the unimodality test. Comparative experimental results on various synthetic and real datasets indicate the effectiveness of our method.</p> ]]></content:encoded> <dc:title>Unsupervised Decision Trees for Axis Unimodal Clustering</dc:title> <dc:creator>Paraskevi Chasani</dc:creator> <dc:creator>Aristidis Likas</dc:creator> <dc:identifier>doi: 10.3390/info15110704</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-05</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-05</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>704</prism:startingPage> <prism:doi>10.3390/info15110704</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/704</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/703"> <title>Information, Vol. 15, Pages 703: Exploring Perspectives of Blockchain Technology and Traditional Centralized Technology in Organ Donation Management: A Comprehensive Review</title> <link>https://www.mdpi.com/2078-2489/15/11/703</link> <description>Background/Objectives: The healthcare sector is rapidly growing, aiming to promote health, provide treatment, and enhance well-being. This paper focuses on the organ donation and transplantation system, a vital aspect of healthcare. 
It offers a comprehensive review of challenges in global organ donation and transplantation, highlighting issues of fairness and transparency, and compares centralized architecture-based models and blockchain-based decentralized models. Methods: This work reviews 370 publications from 2016 to 2023 on organ donation management systems. Out of these, 85 publications met the inclusion criteria, including 67 journal articles, 2 doctoral theses, and 16 conference papers. About 50.6% of these publications focus on global challenges in the system. Additionally, 12.9% of the publications examine centralized architecture-based models, and 36.5% of the publications explore blockchain-based decentralized models. Results: Concerns about organ trafficking, illicit trade, system distrust, and unethical allocation are highlighted, with a lack of transparency as the primary catalyst in organ donation and transplantation. It has been observed that centralized architecture-based models use technologies such as Python, Java, SQL, and Android Technology but face data storage issues. In contrast, blockchain-based decentralized models, mainly using Ethereum and a subset on Hyperledger Fabric, benefit from decentralized data storage, ensure transparency, and address these concerns efficiently. Conclusions: It has been observed that blockchain technology-based models are the better option for organ donation management systems. Further, suggestions for future directions for researchers in the field of organ donation management systems have been presented.</description> <pubDate>2024-11-04</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 703: Exploring Perspectives of Blockchain Technology and Traditional Centralized Technology in Organ Donation Management: A Comprehensive Review</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/703">doi: 10.3390/info15110703</a></p> <p>Authors: Geet Bawa Harmeet Singh Sita Rani Aman Kataria Hong Min </p> <p>Background/Objectives: The healthcare sector is rapidly growing, aiming to promote health, provide treatment, and enhance well-being. This paper focuses on the organ donation and transplantation system, a vital aspect of healthcare. It offers a comprehensive review of challenges in global organ donation and transplantation, highlighting issues of fairness and transparency, and compares centralized architecture-based models and blockchain-based decentralized models. Methods: This work reviews 370 publications from 2016 to 2023 on organ donation management systems. Out of these, 85 publications met the inclusion criteria, including 67 journal articles, 2 doctoral theses, and 16 conference papers. About 50.6% of these publications focus on global challenges in the system. Additionally, 12.9% of the publications examine centralized architecture-based models, and 36.5% of the publications explore blockchain-based decentralized models. Results: Concerns about organ trafficking, illicit trade, system distrust, and unethical allocation are highlighted, with a lack of transparency as the primary catalyst in organ donation and transplantation. It has been observed that centralized architecture-based models use technologies such as Python, Java, SQL, and Android Technology but face data storage issues. In contrast, blockchain-based decentralized models, mainly using Ethereum and a subset on Hyperledger Fabric, benefit from decentralized data storage, ensure transparency, and address these concerns efficiently. Conclusions: It has been observed that blockchain technology-based models are the better option for organ donation management systems. 
Further, suggestions for future directions for researchers in the field of organ donation management systems have been presented.</p> ]]></content:encoded> <dc:title>Exploring Perspectives of Blockchain Technology and Traditional Centralized Technology in Organ Donation Management: A Comprehensive Review</dc:title> <dc:creator>Geet Bawa</dc:creator> <dc:creator>Harmeet Singh</dc:creator> <dc:creator>Sita Rani</dc:creator> <dc:creator>Aman Kataria</dc:creator> <dc:creator>Hong Min</dc:creator> <dc:identifier>doi: 10.3390/info15110703</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-04</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-04</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>703</prism:startingPage> <prism:doi>10.3390/info15110703</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/703</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/702"> <title>Information, Vol. 15, Pages 702: Enhancing Real-Time Cursor Control with Motor Imagery and Deep Neural Networks for Brain&ndash;Computer Interfaces</title> <link>https://www.mdpi.com/2078-2489/15/11/702</link> <description>This paper advances real-time cursor control for individuals with motor impairments through a novel brain&amp;ndash;computer interface (BCI) system based solely on motor imagery. We introduce an enhanced deep neural network (DNN) classifier integrated with a Four-Class Iterative Filtering (FCIF) technique for efficient preprocessing of neural signals. The underlying approach is the Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) and it utilizes a customized filter bank for robust feature extraction, thereby significantly improving signal quality and cursor control responsiveness. 
Extensive testing under varied conditions demonstrates that our system achieves an average classification accuracy of 89.1% and response times of 663 milliseconds, illustrating high precision in feature discrimination. Evaluations using metrics such as Recall, Precision, and F1-Score confirm the system&amp;rsquo;s effectiveness and accuracy in practical applications, making it a valuable tool for enhancing accessibility for individuals with motor disabilities.</description> <pubDate>2024-11-04</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 702: Enhancing Real-Time Cursor Control with Motor Imagery and Deep Neural Networks for Brain&ndash;Computer Interfaces</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/702">doi: 10.3390/info15110702</a></p> <p>Authors: Srinath Akuthota Ravi Chander Janapati K. Raj Kumar Vassilis C. Gerogiannis Andreas Kanavos Biswaranjan Acharya Foteini Grivokostopoulou Usha Desai </p> <p>This paper advances real-time cursor control for individuals with motor impairments through a novel brain&amp;ndash;computer interface (BCI) system based solely on motor imagery. We introduce an enhanced deep neural network (DNN) classifier integrated with a Four-Class Iterative Filtering (FCIF) technique for efficient preprocessing of neural signals. The underlying approach is the Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) and it utilizes a customized filter bank for robust feature extraction, thereby significantly improving signal quality and cursor control responsiveness. Extensive testing under varied conditions demonstrates that our system achieves an average classification accuracy of 89.1% and response times of 663 milliseconds, illustrating high precision in feature discrimination. 
Evaluations using metrics such as Recall, Precision, and F1-Score confirm the system&amp;rsquo;s effectiveness and accuracy in practical applications, making it a valuable tool for enhancing accessibility for individuals with motor disabilities.</p> ]]></content:encoded> <dc:title>Enhancing Real-Time Cursor Control with Motor Imagery and Deep Neural Networks for Brain&amp;ndash;Computer Interfaces</dc:title> <dc:creator>Srinath Akuthota</dc:creator> <dc:creator>Ravi Chander Janapati</dc:creator> <dc:creator>K. Raj Kumar</dc:creator> <dc:creator>Vassilis C. Gerogiannis</dc:creator> <dc:creator>Andreas Kanavos</dc:creator> <dc:creator>Biswaranjan Acharya</dc:creator> <dc:creator>Foteini Grivokostopoulou</dc:creator> <dc:creator>Usha Desai</dc:creator> <dc:identifier>doi: 10.3390/info15110702</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-04</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-04</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>702</prism:startingPage> <prism:doi>10.3390/info15110702</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/702</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/701"> <title>Information, Vol. 15, Pages 701: An Efficient Deep Learning Framework for Optimized Event Forecasting</title> <link>https://www.mdpi.com/2078-2489/15/11/701</link> <description>There have been several catastrophic events that have impacted multiple economies and resulted in thousands of fatalities, and violence has generated a severe political and financial crisis. Multiple studies have been centered around the artificial intelligence (AI) and machine learning (ML) approaches that are most widely used in practice to detect or forecast violent activities. 
However, machine learning algorithms become less accurate in identifying and forecasting violent activity as data volume and complexity increase. For the prediction of future events, we propose a hybrid deep learning (DL)-based model that is composed of a convolutional neural network (CNN), long short-term memory (LSTM), and an attention layer to learn temporal features from the benchmark Global Terrorism Database (GTD). The GTD is an internationally recognized database that includes around 190,000 violent events and occurrences worldwide from 1970 to 2020. We took into account two factors for this experimental work: the type of event and the type of object used. The LSTM model first takes the complex features extracted by the CNN to determine the chronological link between data points, whereas the attention model is used for the time series prediction of an event. The results show that the proposed model achieved good accuracies for both cases&amp;mdash;type of event and type of object&amp;mdash;compared to benchmark studies using the same dataset (98.1% and 97.6%, respectively).</description> <pubDate>2024-11-04</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 701: An Efficient Deep Learning Framework for Optimized Event Forecasting</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/701">doi: 10.3390/info15110701</a></p> <p>Authors: Emad Ul Haq Qazi Muhammad Hamza Faheem Tanveer Zia Muhammad Imran Iftikhar Ahmad </p> <p>There have been several catastrophic events that have impacted multiple economies and resulted in thousands of fatalities, and violence has generated a severe political and financial crisis. Multiple studies have been centered around the artificial intelligence (AI) and machine learning (ML) approaches that are most widely used in practice to detect or forecast violent activities.
However, machine learning algorithms become less accurate in identifying and forecasting violent activity as data volume and complexity increase. For the prediction of future events, we propose a hybrid deep learning (DL)-based model that is composed of a convolutional neural network (CNN), long short-term memory (LSTM), and an attention layer to learn temporal features from the benchmark Global Terrorism Database (GTD). The GTD is an internationally recognized database that includes around 190,000 violent events and occurrences worldwide from 1970 to 2020. We took into account two factors for this experimental work: the type of event and the type of object used. The LSTM model first takes the complex features extracted by the CNN to determine the chronological link between data points, whereas the attention model is used for the time series prediction of an event. The results show that the proposed model achieved good accuracies for both cases&amp;mdash;type of event and type of object&amp;mdash;compared to benchmark studies using the same dataset (98.1% and 97.6%, respectively).</p> ]]></content:encoded> <dc:title>An Efficient Deep Learning Framework for Optimized Event Forecasting</dc:title> <dc:creator>Emad Ul Haq Qazi</dc:creator> <dc:creator>Muhammad Hamza Faheem</dc:creator> <dc:creator>Tanveer Zia</dc:creator> <dc:creator>Muhammad Imran</dc:creator> <dc:creator>Iftikhar Ahmad</dc:creator> <dc:identifier>doi: 10.3390/info15110701</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-04</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-04</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>701</prism:startingPage> <prism:doi>10.3390/info15110701</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/701</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item 
rdf:about="https://www.mdpi.com/2078-2489/15/11/700"> <title>Information, Vol. 15, Pages 700: AI Impact on Hotel Guest Satisfaction via Tailor-Made Services: A Case Study of Serbia and Hungary</title> <link>https://www.mdpi.com/2078-2489/15/11/700</link> <description>This study examines the level of implementation of artificial intelligence (AI) in the personalization of hotel services and its impact on guest satisfaction through an analysis of tourists&amp;rsquo; attitudes and behaviors. The focus of the research is on how personalized recommendations for food and beverages, activities, and room services, delivered by trustworthy AI systems, digital experience, and the perception of privacy and data security, influence overall guest satisfaction. The research was conducted in Serbia and Hungary, using structural models to assess and analyze direct and indirect effects. The results show that AI personalization significantly contributes to guest satisfaction, with mediating variables such as trust in AI systems and technological experience playing a key role. A comparative analysis highlights differences between Hungary, a member of the European Union, and Serbia, a country in transition, shedding light on specific regulatory frameworks and cultural preferences in these countries.</description> <pubDate>2024-11-04</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 700: AI Impact on Hotel Guest Satisfaction via Tailor-Made Services: A Case Study of Serbia and Hungary</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/700">doi: 10.3390/info15110700</a></p> <p>Authors: Ranko Makivić Dragan Vukolić Sonja Veljović Minja Bolesnikov Lóránt Dénes Dávid Andrea Ivanišević Mario Silić Tamara Gajić </p> <p>This study examines the level of implementation of artificial intelligence (AI) in the personalization of hotel services and its impact on guest satisfaction through an analysis of tourists&amp;rsquo; attitudes and behaviors. The focus of the research is on how personalized recommendations for food and beverages, activities, and room services, delivered by trustworthy AI systems, digital experience, and the perception of privacy and data security, influence overall guest satisfaction. The research was conducted in Serbia and Hungary, using structural models to assess and analyze direct and indirect effects. The results show that AI personalization significantly contributes to guest satisfaction, with mediating variables such as trust in AI systems and technological experience playing a key role. 
A comparative analysis highlights differences between Hungary, a member of the European Union, and Serbia, a country in transition, shedding light on specific regulatory frameworks and cultural preferences in these countries.</p> ]]></content:encoded> <dc:title>AI Impact on Hotel Guest Satisfaction via Tailor-Made Services: A Case Study of Serbia and Hungary</dc:title> <dc:creator>Ranko Makivić</dc:creator> <dc:creator>Dragan Vukolić</dc:creator> <dc:creator>Sonja Veljović</dc:creator> <dc:creator>Minja Bolesnikov</dc:creator> <dc:creator>Lóránt Dénes Dávid</dc:creator> <dc:creator>Andrea Ivanišević</dc:creator> <dc:creator>Mario Silić</dc:creator> <dc:creator>Tamara Gajić</dc:creator> <dc:identifier>doi: 10.3390/info15110700</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-04</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-04</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>700</prism:startingPage> <prism:doi>10.3390/info15110700</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/700</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/699"> <title>Information, Vol. 15, Pages 699: Domain Adaptive Urban Garbage Detection Based on Attention and Confidence Fusion</title> <link>https://www.mdpi.com/2078-2489/15/11/699</link> <description>To overcome the challenges posed by limited garbage datasets and the laborious nature of data labeling in urban garbage object detection, we propose an innovative unsupervised domain adaptation approach to detecting garbage objects in urban aerial images. The proposed method leverages a detector, initially trained on source domain images, to generate pseudo-labels for target domain images. 
By employing an attention and confidence fusion strategy, images from both source and target domains can be seamlessly integrated, thereby enabling the detector to incrementally adapt to target domain scenarios while preserving its detection efficacy in the source domain. This approach mitigates the performance degradation caused by domain discrepancies, significantly enhancing the model&amp;rsquo;s adaptability. The proposed method was validated on a self-constructed urban garbage dataset. Experimental results demonstrate its superior performance over baseline models. Furthermore, we extended the proposed mixing method to other typical scenarios and conducted comprehensive experiments on four well-known public datasets: Cityscapes, KITTI, Sim10k, and Foggy Cityscapes. The results show that the proposed method exhibits remarkable effectiveness and adaptability across diverse datasets.</description> <pubDate>2024-11-04</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 699: Domain Adaptive Urban Garbage Detection Based on Attention and Confidence Fusion</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/699">doi: 10.3390/info15110699</a></p> <p>Authors: Tianlong Yuan Jietao Lin Keyong Hu Wenqian Chen Yifan Hu </p> <p>To overcome the challenges posed by limited garbage datasets and the laborious nature of data labeling in urban garbage object detection, we propose an innovative unsupervised domain adaptation approach to detecting garbage objects in urban aerial images. The proposed method leverages a detector, initially trained on source domain images, to generate pseudo-labels for target domain images. By employing an attention and confidence fusion strategy, images from both source and target domains can be seamlessly integrated, thereby enabling the detector to incrementally adapt to target domain scenarios while preserving its detection efficacy in the source domain. 
This approach mitigates the performance degradation caused by domain discrepancies, significantly enhancing the model&amp;rsquo;s adaptability. The proposed method was validated on a self-constructed urban garbage dataset. Experimental results demonstrate its superior performance over baseline models. Furthermore, we extended the proposed mixing method to other typical scenarios and conducted comprehensive experiments on four well-known public datasets: Cityscapes, KITTI, Sim10k, and Foggy Cityscapes. The results show that the proposed method exhibits remarkable effectiveness and adaptability across diverse datasets.</p> ]]></content:encoded> <dc:title>Domain Adaptive Urban Garbage Detection Based on Attention and Confidence Fusion</dc:title> <dc:creator>Tianlong Yuan</dc:creator> <dc:creator>Jietao Lin</dc:creator> <dc:creator>Keyong Hu</dc:creator> <dc:creator>Wenqian Chen</dc:creator> <dc:creator>Yifan Hu</dc:creator> <dc:identifier>doi: 10.3390/info15110699</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-04</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-04</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>699</prism:startingPage> <prism:doi>10.3390/info15110699</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/699</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/698"> <title>Information, Vol. 15, Pages 698: Correlations and Fractality in Sentence-Level Sentiment Analysis Based on VADER for Literary Texts</title> <link>https://www.mdpi.com/2078-2489/15/11/698</link> <description>We perform a sentence-level sentiment analysis study of different literary texts in the English language. 
Each text is converted into a series in which the data points are the sentiment value of each sentence obtained using the sentiment analysis tool (VADER). By applying the Detrended Fluctuation Analysis (DFA) and the Higuchi Fractal Dimension (HFD) methods to these sentiment series, we find that they are monofractal with long-term correlations, which can be explained by the fact that the writing process has memory by construction, with a sentiment evolution that is self-similar. Furthermore, we discretize these series by applying a classification approach which transforms the series into one in which each data point has only three possible values, corresponding to positive, neutral or negative sentiments. We map these three-state series to a Markov chain and investigate the transitions of sentiment from one sentence to the next, obtaining a state transition matrix for each book that provides information on the probability of transitioning between sentiments from one sentence to the next. This approach shows that there are biases towards increasing the probability of switching to neutral or positive sentences. The two approaches complement each other, since the long-term correlation approach allows a global assessment of the sentiment of the book, while the state transition matrix approach provides local information about the sentiment evolution along the text.</description> <pubDate>2024-11-04</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 698: Correlations and Fractality in Sentence-Level Sentiment Analysis Based on VADER for Literary Texts</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/698">doi: 10.3390/info15110698</a></p> <p>Authors: Ricardo Hernández-Pérez Pablo Lara-Martínez Bibiana Obregón-Quintana Larry S. Liebovitch Lev Guzmán-Vargas </p> <p>We perform a sentence-level sentiment analysis study of different literary texts in the English language. 
Each text is converted into a series in which the data points are the sentiment value of each sentence obtained using the sentiment analysis tool (VADER). By applying the Detrended Fluctuation Analysis (DFA) and the Higuchi Fractal Dimension (HFD) methods to these sentiment series, we find that they are monofractal with long-term correlations, which can be explained by the fact that the writing process has memory by construction, with a sentiment evolution that is self-similar. Furthermore, we discretize these series by applying a classification approach which transforms the series into one in which each data point has only three possible values, corresponding to positive, neutral or negative sentiments. We map these three-state series to a Markov chain and investigate the transitions of sentiment from one sentence to the next, obtaining a state transition matrix for each book that provides information on the probability of transitioning between sentiments from one sentence to the next. This approach shows that there are biases towards increasing the probability of switching to neutral or positive sentences. The two approaches complement each other, since the long-term correlation approach allows a global assessment of the sentiment of the book, while the state transition matrix approach provides local information about the sentiment evolution along the text.</p> ]]></content:encoded> <dc:title>Correlations and Fractality in Sentence-Level Sentiment Analysis Based on VADER for Literary Texts</dc:title> <dc:creator>Ricardo Hernández-Pérez</dc:creator> <dc:creator>Pablo Lara-Martínez</dc:creator> <dc:creator>Bibiana Obregón-Quintana</dc:creator> <dc:creator>Larry S. 
Liebovitch</dc:creator> <dc:creator>Lev Guzmán-Vargas</dc:creator> <dc:identifier>doi: 10.3390/info15110698</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-04</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-04</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>698</prism:startingPage> <prism:doi>10.3390/info15110698</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/698</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/697"> <title>Information, Vol. 15, Pages 697: Privacy-Preserving Techniques in Generative AI and Large Language Models: A Narrative Review</title> <link>https://www.mdpi.com/2078-2489/15/11/697</link> <description>Generative AI, including large language models (LLMs), has transformed the paradigm of data generation and creative content, but this progress raises critical privacy concerns, especially when models are trained on sensitive data. This review provides a comprehensive overview of privacy-preserving techniques aimed at safeguarding data privacy in generative AI, such as differential privacy (DP), federated learning (FL), homomorphic encryption (HE), and secure multi-party computation (SMPC). These techniques mitigate risks like model inversion, data leakage, and membership inference attacks, which are particularly relevant to LLMs. Additionally, the review explores emerging solutions, including privacy-enhancing technologies and post-quantum cryptography, as future directions for enhancing privacy in generative AI systems. Recognizing that achieving absolute privacy is mathematically impossible, the review emphasizes the necessity of aligning technical safeguards with legal and regulatory frameworks to ensure compliance with data protection laws. 
By discussing the ethical and legal implications of privacy risks in generative AI, the review underscores the need for a balanced approach that considers performance, scalability, and privacy preservation. The findings highlight the need for ongoing research and innovation to develop privacy-preserving techniques that keep pace with the scaling of generative AI, especially in large language models, while adhering to regulatory and ethical standards.</description> <pubDate>2024-11-04</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 697: Privacy-Preserving Techniques in Generative AI and Large Language Models: A Narrative Review</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/697">doi: 10.3390/info15110697</a></p> <p>Authors: Georgios Feretzakis Konstantinos Papaspyridis Aris Gkoulalas-Divanis Vassilios S. Verykios </p> <p>Generative AI, including large language models (LLMs), has transformed the paradigm of data generation and creative content, but this progress raises critical privacy concerns, especially when models are trained on sensitive data. This review provides a comprehensive overview of privacy-preserving techniques aimed at safeguarding data privacy in generative AI, such as differential privacy (DP), federated learning (FL), homomorphic encryption (HE), and secure multi-party computation (SMPC). These techniques mitigate risks like model inversion, data leakage, and membership inference attacks, which are particularly relevant to LLMs. Additionally, the review explores emerging solutions, including privacy-enhancing technologies and post-quantum cryptography, as future directions for enhancing privacy in generative AI systems. Recognizing that achieving absolute privacy is mathematically impossible, the review emphasizes the necessity of aligning technical safeguards with legal and regulatory frameworks to ensure compliance with data protection laws. 
By discussing the ethical and legal implications of privacy risks in generative AI, the review underscores the need for a balanced approach that considers performance, scalability, and privacy preservation. The findings highlight the need for ongoing research and innovation to develop privacy-preserving techniques that keep pace with the scaling of generative AI, especially in large language models, while adhering to regulatory and ethical standards.</p> ]]></content:encoded> <dc:title>Privacy-Preserving Techniques in Generative AI and Large Language Models: A Narrative Review</dc:title> <dc:creator>Georgios Feretzakis</dc:creator> <dc:creator>Konstantinos Papaspyridis</dc:creator> <dc:creator>Aris Gkoulalas-Divanis</dc:creator> <dc:creator>Vassilios S. Verykios</dc:creator> <dc:identifier>doi: 10.3390/info15110697</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-04</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-04</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>697</prism:startingPage> <prism:doi>10.3390/info15110697</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/697</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/696"> <title>Information, Vol. 15, Pages 696: Evaluating Feature Impact Prior to Phylogenetic Analysis Using Machine Learning Techniques</title> <link>https://www.mdpi.com/2078-2489/15/11/696</link> <description>The purpose of this paper is to describe a feature selection algorithm and its application to enhance the accuracy of the reconstruction of phylogenetic trees by improving the efficiency of tree construction. 
Machine learning models, namely deep neural networks (DNNs), support vector machines (SVMs), and random forests (RFs), were applied to Arabic and Aramaic scripts, and each model was used to compare the resulting phylogenies. The methodology was applied to a dataset containing Arabic and Aramaic scripts, demonstrating its relevance in a range of phylogenetic analyses. The results emphasize the essential role of feature selection and show that DNNs outperform the other models in terms of area under the curve (AUC) and equal error rate (EER) across various datasets and fold sizes. Furthermore, both SVM and RF models are valuable for understanding the strengths and limitations of these approaches in the context of phylogenetic analysis. This method not only simplifies the tree structures but also enhances their Consistency Index values. Together, these approaches offer a robust framework for evolutionary studies. The findings highlight the application of machine learning in phylogenetics, suggesting a path toward accurate and efficient evolutionary analyses and enabling a deeper understanding of evolutionary relationships.</description> <pubDate>2024-11-04</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 696: Evaluating Feature Impact Prior to Phylogenetic Analysis Using Machine Learning Techniques</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/696">doi: 10.3390/info15110696</a></p> <p>Authors: Osama A. Salman Gábor Hosszú </p> <p>The purpose of this paper is to describe a feature selection algorithm and its application to enhance the accuracy of the reconstruction of phylogenetic trees by improving the efficiency of tree construction. Machine learning models, namely deep neural networks (DNNs), support vector machines (SVMs), and random forests (RFs), were applied to Arabic and Aramaic scripts, and each model was used to compare the resulting phylogenies. 
The methodology was applied to a dataset containing Arabic and Aramaic scripts, demonstrating its relevance in a range of phylogenetic analyses. The results emphasize the essential role of feature selection and show that DNNs outperform the other models in terms of area under the curve (AUC) and equal error rate (EER) across various datasets and fold sizes. Furthermore, both SVM and RF models are valuable for understanding the strengths and limitations of these approaches in the context of phylogenetic analysis. This method not only simplifies the tree structures but also enhances their Consistency Index values. Together, these approaches offer a robust framework for evolutionary studies. The findings highlight the application of machine learning in phylogenetics, suggesting a path toward accurate and efficient evolutionary analyses and enabling a deeper understanding of evolutionary relationships.</p> ]]></content:encoded> <dc:title>Evaluating Feature Impact Prior to Phylogenetic Analysis Using Machine Learning Techniques</dc:title> <dc:creator>Osama A. Salman</dc:creator> <dc:creator>Gábor Hosszú</dc:creator> <dc:identifier>doi: 10.3390/info15110696</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-04</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-04</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>696</prism:startingPage> <prism:doi>10.3390/info15110696</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/696</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/694"> <title>Information, Vol. 
15, Pages 694: Consumer Satisfaction Benchmarking Analysis Using Group Decision Support System (GDSS) PROMETHEE Methodology in a GIS Environment</title> <link>https://www.mdpi.com/2078-2489/15/11/694</link> <description>In today&amp;rsquo;s competitive environment, multi-branch companies allocate their stores with the aim of expanding their territorial coverage to attract new customers and increase their market share. Consumer satisfaction surveys either produce global performance results or are unable to differentiate consumer perceptions using location analytics. This research develops a novel framework to assist multi-branch companies in mapping the consumer satisfaction performance of their stores, expanding conventional customer relationship management to the spatial context. The framework developed proposes a decision model that combines the Group Decision Support extension of the PROMETHEE and CRITIC methods in a GIS environment to generate satisfaction performance mappings. The developed decision-making framework converts consumer responses into satisfaction performance maps, allowing the company&amp;rsquo;s stores and their competitors to be evaluated. Moreover, it provides insight into the potential opportunities and threats for each store. The performance of the proposed framework is highlighted through a case study involving a multi-branch coffeehouse company in a Greek city. Finally, a tool developed to assist the computational part of the framework is presented.</description> <pubDate>2024-11-03</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 694: Consumer Satisfaction Benchmarking Analysis Using Group Decision Support System (GDSS) PROMETHEE Methodology in a GIS Environment</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/694">doi: 10.3390/info15110694</a></p> <p>Authors: Anastasia S. Saridou Athanasios P. 
Vavatsikos </p> <p>In today&amp;rsquo;s competitive environment, multi-branch companies allocate their stores with the aim of expanding their territorial coverage to attract new customers and increase their market share. Consumer satisfaction surveys either produce global performance results or fail to differentiate consumer perceptions using location analytics. This research develops a novel framework to assist multi-branch companies in mapping the consumer satisfaction performance of their stores, expanding conventional customer relationship management to the spatial context. The developed framework proposes a decision model that combines the Group Decision Support extension of the PROMETHEE and CRITIC methods in a GIS environment to generate satisfaction performance mappings. The developed decision-making framework converts consumer responses into satisfaction performance maps, allowing the company&amp;rsquo;s stores and their competitors to be evaluated. Moreover, it provides insight into the potential opportunities and threats for each store. The performance of the proposed framework is highlighted through a case study involving a multi-branch coffeehouse company in a Greek city. Finally, a tool developed to assist the computational part of the framework is presented.</p> ]]></content:encoded> <dc:title>Consumer Satisfaction Benchmarking Analysis Using Group Decision Support System (GDSS) PROMETHEE Methodology in a GIS Environment</dc:title> <dc:creator>Anastasia S. Saridou</dc:creator> <dc:creator>Athanasios P. 
Vavatsikos</dc:creator> <dc:identifier>doi: 10.3390/info15110694</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-03</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-03</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>694</prism:startingPage> <prism:doi>10.3390/info15110694</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/694</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/695"> <title>Information, Vol. 15, Pages 695: Improving Consumer Health Search with Field-Level Learning-to-Rank Techniques</title> <link>https://www.mdpi.com/2078-2489/15/11/695</link> <description>In the area of consumer health search (CHS), there is an increasing concern about returning topically relevant and understandable health information to the user. Besides being used to rank topically relevant documents, Learning to Rank (LTR) has also been used to promote understandability ranking. Traditionally, features coming from different document fields are joined together, limiting the performance of standard LTR, since field information plays an important role in promoting understandability ranking. In this paper, a novel field-level Learning-to-Rank (f-LTR) approach is proposed, and its application in CHS is investigated by developing thorough experiments on the CLEF 2016&amp;ndash;2018 eHealth IR data collections. An in-depth analysis of the effects of using f-LTR is provided, with experimental results suggesting that in LTR, title features are more effective than other field features in promoting understandability ranking. Moreover, the fused f-LTR model is compared to existing work, confirming the effectiveness of the methodology.</description> <pubDate>2024-11-03</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 695: Improving Consumer Health Search with Field-Level Learning-to-Rank Techniques</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/695">doi: 10.3390/info15110695</a></p> <p>Authors: Hua Yang Teresa Gonçalves </p> <p>In the area of consumer health search (CHS), there is an increasing concern about returning topically relevant and understandable health information to the user. Besides being used to rank topically relevant documents, Learning to Rank (LTR) has also been used to promote understandability ranking. Traditionally, features coming from different document fields are joined together, limiting the performance of standard LTR, since field information plays an important role in promoting understandability ranking. In this paper, a novel field-level Learning-to-Rank (f-LTR) approach is proposed, and its application in CHS is investigated by developing thorough experiments on the CLEF 2016&amp;ndash;2018 eHealth IR data collections. An in-depth analysis of the effects of using f-LTR is provided, with experimental results suggesting that in LTR, title features are more effective than other field features in promoting understandability ranking. 
Moreover, the fused f-LTR model is compared to existing work, confirming the effectiveness of the methodology.</p> ]]></content:encoded> <dc:title>Improving Consumer Health Search with Field-Level Learning-to-Rank Techniques</dc:title> <dc:creator>Hua Yang</dc:creator> <dc:creator>Teresa Gonçalves</dc:creator> <dc:identifier>doi: 10.3390/info15110695</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-03</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-03</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>695</prism:startingPage> <prism:doi>10.3390/info15110695</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/695</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/693"> <title>Information, Vol. 15, Pages 693: Leveraging Social Media for Stakeholder Engagement: A Case on the Ship Management Industry</title> <link>https://www.mdpi.com/2078-2489/15/11/693</link> <description>Social media is an important driver of firm success by providing an avenue for stakeholder engagement. Operating in a highly complex and competitive environment, firms in the ship management industry can utilise social media platforms to engage with their stakeholders, which can enhance stakeholder satisfaction and loyalty. However, stakeholder engagement rates can vary, with some posts generating more engagement than others. Drawing on the perceived value and word-of-mouth psychological motivation theories, this study introduces a theoretical model to identify and examine factors influencing stakeholder engagement on LinkedIn in the ship management industry. A hierarchical regression analysis is conducted on the posts of ten ship management firms to study the influence of content type and message characteristics variables on engagement rates. 
The results revealed nine variables that can significantly influence stakeholder engagement. They are links, corporate brand names, calls to action, message length, tangible resources, social content, emotional content, first-person texts, and emojis. The findings provide recommendations for firms in the ship management industry in terms of the message strategies to incorporate into their posts to encourage higher engagement rates. This study also enriches the literature on stakeholder engagement on social media.</description> <pubDate>2024-11-03</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 693: Leveraging Social Media for Stakeholder Engagement: A Case on the Ship Management Industry</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/693">doi: 10.3390/info15110693</a></p> <p>Authors: Kum Fai Yuen Jun Da Lee Cam Tu Nguyen Xueqin Wang </p> <p>Social media is an important driver of firm success by providing an avenue for stakeholder engagement. Operating in a highly complex and competitive environment, firms in the ship management industry can utilise social media platforms to engage with their stakeholders, which can enhance stakeholder satisfaction and loyalty. However, stakeholder engagement rates can vary, with some posts generating more engagement than others. Drawing on the perceived value and word-of-mouth psychological motivation theories, this study introduces a theoretical model to identify and examine factors influencing stakeholder engagement on LinkedIn in the ship management industry. A hierarchical regression analysis is conducted on the posts of ten ship management firms to study the influence of content type and message characteristics variables on engagement rates. The results revealed nine variables that can significantly influence stakeholder engagement. 
They are links, corporate brand names, calls to action, message length, tangible resources, social content, emotional content, first-person texts, and emojis. The findings provide recommendations for firms in the ship management industry in terms of the message strategies to incorporate into their posts to encourage higher engagement rates. This study also enriches the literature on stakeholder engagement on social media.</p> ]]></content:encoded> <dc:title>Leveraging Social Media for Stakeholder Engagement: A Case on the Ship Management Industry</dc:title> <dc:creator>Kum Fai Yuen</dc:creator> <dc:creator>Jun Da Lee</dc:creator> <dc:creator>Cam Tu Nguyen</dc:creator> <dc:creator>Xueqin Wang</dc:creator> <dc:identifier>doi: 10.3390/info15110693</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-03</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-03</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>693</prism:startingPage> <prism:doi>10.3390/info15110693</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/693</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/692"> <title>Information, Vol. 15, Pages 692: Quantum Marine Predator Algorithm: A Quantum Leap in Photovoltaic Efficiency Under Dynamic Conditions</title> <link>https://www.mdpi.com/2078-2489/15/11/692</link> <description>The Quantum Marine Predator Algorithm (QMPA) presents a groundbreaking solution to the inherent limitations of conventional Maximum Power Point Tracking (MPPT) techniques in photovoltaic systems. These limitations, such as sluggish response times and inadequate adaptability to environmental fluctuations, are particularly pronounced in regions with challenging weather patterns like Sunderland. 
QMPA emerges as a formidable contender by seamlessly integrating the sophisticated hunting tactics of marine predators with the principles of quantum mechanics. This amalgamation not only enhances operational efficiency but also addresses the need for real-time adaptability. One of the most striking advantages of QMPA is its remarkable improvement in response time and adaptability. Compared to traditional MPPT methods, which often struggle to keep pace with rapidly changing environmental factors, QMPA demonstrates a significant reduction in response time, resulting in up to a 30% increase in efficiency under fluctuating irradiance conditions for a resistive load of 100 &amp;Omega;. These findings are derived from extensive experimentation using NASA&amp;rsquo;s worldwide power prediction data. Through a detailed comparative analysis with existing MPPT methodologies, QMPA consistently outperforms its counterparts, exhibiting superior operational efficiency and stability across varying environmental scenarios. By substantiating its claims with concrete data and measurable improvements, this research transcends generic assertions and establishes QMPA as a tangible advancement in MPPT technology.</description> <pubDate>2024-11-03</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 692: Quantum Marine Predator Algorithm: A Quantum Leap in Photovoltaic Efficiency Under Dynamic Conditions</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/692">doi: 10.3390/info15110692</a></p> <p>Authors: Okba Fergani Yassine Himeur Raihane Mechgoug Shadi Atalla Wathiq Mansoor Nacira Tkouti </p> <p>The Quantum Marine Predator Algorithm (QMPA) presents a groundbreaking solution to the inherent limitations of conventional Maximum Power Point Tracking (MPPT) techniques in photovoltaic systems. 
These limitations, such as sluggish response times and inadequate adaptability to environmental fluctuations, are particularly pronounced in regions with challenging weather patterns like Sunderland. QMPA emerges as a formidable contender by seamlessly integrating the sophisticated hunting tactics of marine predators with the principles of quantum mechanics. This amalgamation not only enhances operational efficiency but also addresses the need for real-time adaptability. One of the most striking advantages of QMPA is its remarkable improvement in response time and adaptability. Compared to traditional MPPT methods, which often struggle to keep pace with rapidly changing environmental factors, QMPA demonstrates a significant reduction in response time, resulting in up to a 30% increase in efficiency under fluctuating irradiance conditions for a resistive load of 100 &amp;Omega;. These findings are derived from extensive experimentation using NASA&amp;rsquo;s worldwide power prediction data. Through a detailed comparative analysis with existing MPPT methodologies, QMPA consistently outperforms its counterparts, exhibiting superior operational efficiency and stability across varying environmental scenarios. 
By substantiating its claims with concrete data and measurable improvements, this research transcends generic assertions and establishes QMPA as a tangible advancement in MPPT technology.</p> ]]></content:encoded> <dc:title>Quantum Marine Predator Algorithm: A Quantum Leap in Photovoltaic Efficiency Under Dynamic Conditions</dc:title> <dc:creator>Okba Fergani</dc:creator> <dc:creator>Yassine Himeur</dc:creator> <dc:creator>Raihane Mechgoug</dc:creator> <dc:creator>Shadi Atalla</dc:creator> <dc:creator>Wathiq Mansoor</dc:creator> <dc:creator>Nacira Tkouti</dc:creator> <dc:identifier>doi: 10.3390/info15110692</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-03</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-03</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>692</prism:startingPage> <prism:doi>10.3390/info15110692</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/692</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/690"> <title>Information, Vol. 15, Pages 690: Public Health Using Social Network Analysis During the COVID-19 Era: A Systematic Review</title> <link>https://www.mdpi.com/2078-2489/15/11/690</link> <description>Social network analysis (SNA), or the application of network analysis techniques to social media data, is an increasingly prominent approach used in computational public health research. We conducted a systematic review to investigate trends around SNA applied to social media data for public health and epidemiology while outlining existing ethical practices. Following PRISMA guidelines, we reviewed articles from Web of Science and PubMed published between January 2019 and February 2024, leading to a total of 51 papers surveyed. 
The majority of analyzed research (69%) involved studying Twitter/X, followed by Sina Weibo (16%). The most prominent topics in this timeframe were related to COVID-19, while other papers explored public health topics such as citizen science, public emergencies, behavior change, and various medical conditions. We surveyed the methodological approaches and network characteristics commonly employed in public health SNA studies, finding that most studies applied only basic network metrics and algorithms such as layout, community detection, and standard centrality measures. We highlight the ethical concerns related to the use of social media data, such as privacy and consent, underscoring the potential of integrating ethical SNA with more inclusive, human-centered practices to enhance the effectiveness and community buy-in of emerging computational public health efforts.</description> <pubDate>2024-11-02</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 690: Public Health Using Social Network Analysis During the COVID-19 Era: A Systematic Review</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/690">doi: 10.3390/info15110690</a></p> <p>Authors: Stanislava Gardasevic Aditi Jaiswal Manika Lamba Jena Funakoshi Kar-Hai Chu Aekta Shah Yinan Sun Pallav Pokhrel Peter Washington </p> <p>Social network analysis (SNA), or the application of network analysis techniques to social media data, is an increasingly prominent approach used in computational public health research. We conducted a systematic review to investigate trends around SNA applied to social media data for public health and epidemiology while outlining existing ethical practices. Following PRISMA guidelines, we reviewed articles from Web of Science and PubMed published between January 2019 and February 2024, leading to a total of 51 papers surveyed. The majority of analyzed research (69%) involved studying Twitter/X, followed by Sina Weibo (16%). 
The most prominent topics in this timeframe were related to COVID-19, while other papers explored public health topics such as citizen science, public emergencies, behavior change, and various medical conditions. We surveyed the methodological approaches and network characteristics commonly employed in public health SNA studies, finding that most studies applied only basic network metrics and algorithms such as layout, community detection, and standard centrality measures. We highlight the ethical concerns related to the use of social media data, such as privacy and consent, underscoring the potential of integrating ethical SNA with more inclusive, human-centered practices to enhance the effectiveness and community buy-in of emerging computational public health efforts.</p> ]]></content:encoded> <dc:title>Public Health Using Social Network Analysis During the COVID-19 Era: A Systematic Review</dc:title> <dc:creator>Stanislava Gardasevic</dc:creator> <dc:creator>Aditi Jaiswal</dc:creator> <dc:creator>Manika Lamba</dc:creator> <dc:creator>Jena Funakoshi</dc:creator> <dc:creator>Kar-Hai Chu</dc:creator> <dc:creator>Aekta Shah</dc:creator> <dc:creator>Yinan Sun</dc:creator> <dc:creator>Pallav Pokhrel</dc:creator> <dc:creator>Peter Washington</dc:creator> <dc:identifier>doi: 10.3390/info15110690</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-02</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-02</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>690</prism:startingPage> <prism:doi>10.3390/info15110690</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/690</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/691"> <title>Information, Vol. 
15, Pages 691: Comprehensive Review and Future Research Directions on ICT Standardisation</title> <link>https://www.mdpi.com/2078-2489/15/11/691</link> <description>Standardisation has become imperative for maintaining order and enabling development in modern society. Even simple arrangements, such as train timetables and the width of railroad tracks, would be very difficult to achieve without standardisation. Standardisation also solves everyday problems, such as allowing mobile devices to keep working when their users travel abroad. In this paper, we perform a large-scale quantitative analysis of papers dealing with (1) standards and (2) information and communications technology (ICT) across three major databases, namely Web of Science, IEEE Xplore, and the ACM Digital Library. These three databases yielded 216 articles, which were divided into six categories: standard-related review and survey studies, information management across hardware and software standards, energy management standards, machine learning model classification performance, privacy-aware software system standards, and health information and communications technology standards. This paper discusses how standardisation facilitates the planning of the entire research and innovation process by encouraging discussions regarding the specific outputs the research aims to achieve. The paper further illustrates that references to standardisation within call topics act as a crucial motivating factor in the decision to adopt standardisation. In conclusion, our contribution provides a better understanding of standards in peer-reviewed publications and an essential foundation for future research. In addition, we demonstrate that standards play an important role in innovation.</description> <pubDate>2024-11-02</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 691: Comprehensive Review and Future Research Directions on ICT Standardisation</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/691">doi: 10.3390/info15110691</a></p> <p>Authors: Mohammed Najah Mahdi Ray Walshe Sharon Farrell Harshvardhan J. Pandit </p> <p>Standardisation has become imperative for maintaining order and enabling development in modern society. Even simple arrangements, such as train timetables and the width of railroad tracks, would be very difficult to achieve without standardisation. Standardisation also solves everyday problems, such as allowing mobile devices to keep working when their users travel abroad. In this paper, we perform a large-scale quantitative analysis of papers dealing with (1) standards and (2) information and communications technology (ICT) across three major databases, namely Web of Science, IEEE Xplore, and the ACM Digital Library. These three databases yielded 216 articles, which were divided into six categories: standard-related review and survey studies, information management across hardware and software standards, energy management standards, machine learning model classification performance, privacy-aware software system standards, and health information and communications technology standards. This paper discusses how standardisation facilitates the planning of the entire research and innovation process by encouraging discussions regarding the specific outputs the research aims to achieve. The paper further illustrates that references to standardisation within call topics act as a crucial motivating factor in the decision to adopt standardisation. In conclusion, our contribution provides a better understanding of standards in peer-reviewed publications and an essential foundation for future research. 
In addition, we demonstrate that standards play an important role in innovation.</p> ]]></content:encoded> <dc:title>Comprehensive Review and Future Research Directions on ICT Standardisation</dc:title> <dc:creator>Mohammed Najah Mahdi</dc:creator> <dc:creator>Ray Walshe</dc:creator> <dc:creator>Sharon Farrell</dc:creator> <dc:creator>Harshvardhan J. Pandit</dc:creator> <dc:identifier>doi: 10.3390/info15110691</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-02</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-02</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>691</prism:startingPage> <prism:doi>10.3390/info15110691</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/691</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/689"> <title>Information, Vol. 15, Pages 689: Stacking Ensemble Technique Using Optimized Machine Learning Models with Boruta&ndash;XGBoost Feature Selection for Landslide Susceptibility Mapping: A Case of Kermanshah Province, Iran</title> <link>https://www.mdpi.com/2078-2489/15/11/689</link> <description>Landslides cause significant human and financial losses in different regions of the world. A high-accuracy landslide susceptibility map (LSM) is required to reduce the adverse effects of landslides. Machine learning (ML) is a robust tool for LSM creation. ML models require large amounts of data to predict landslides accurately. This study has developed a stacking ensemble technique based on ML and optimization to enhance the accuracy of an LSM while considering small datasets. The Boruta&amp;ndash;XGBoost feature selection was used to determine the optimal combination of features. 
Then, an intelligent and accurate analysis was performed to prepare the LSM using a dynamic and hybrid approach based on the Adaptive Neuro-Fuzzy Inference System (ANFIS), Extreme Learning Machine (ELM), Support Vector Regression (SVR), and new optimization algorithms (Ladybug Beetle Optimization [LBO] and Electric Eel Foraging Optimization [EEFO]). After model optimization, a stacking ensemble learning technique was used to weight the models and combine the model outputs to increase the accuracy and reliability of the LSM. The weight combinations of the models were optimized using LBO and EEFO. The Root Mean Square Error (RMSE) and Area Under the Receiver Operating Characteristic Curve (AUC-ROC) parameters were used to assess the performance of these models. A landslide dataset from Kermanshah province, Iran, and 17 influencing factors were used to evaluate the proposed approach. The landslide inventory comprised 116 points, and the combined Voronoi and entropy method was applied for non-landslide point sampling. The results showed higher accuracy from the stacking ensemble technique with the EEFO and LBO algorithms, with AUC-ROC values of 94.81% and 94.84% and RMSE values of 0.3146 and 0.3142, respectively. The proposed approach can help managers and planners prepare accurate and reliable LSMs and, as a result, reduce the human and financial losses associated with landslide events.</description> <pubDate>2024-11-02</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 689: Stacking Ensemble Technique Using Optimized Machine Learning Models with Boruta&ndash;XGBoost Feature Selection for Landslide Susceptibility Mapping: A Case of Kermanshah Province, Iran</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/689">doi: 10.3390/info15110689</a></p> <p>Authors: Zeynab Yousefi Ali Asghar Alesheikh Ali Jafari Sara Torktatari Mohammad Sharif </p> <p>Landslides cause significant human and financial losses in different regions of the world. 
A high-accuracy landslide susceptibility map (LSM) is required to reduce the adverse effects of landslides. Machine learning (ML) is a robust tool for LSM creation. ML models require large amounts of data to predict landslides accurately. This study has developed a stacking ensemble technique based on ML and optimization to enhance the accuracy of an LSM while considering small datasets. The Boruta&amp;ndash;XGBoost feature selection was used to determine the optimal combination of features. Then, an intelligent and accurate analysis was performed to prepare the LSM using a dynamic and hybrid approach based on the Adaptive Neuro-Fuzzy Inference System (ANFIS), Extreme Learning Machine (ELM), Support Vector Regression (SVR), and new optimization algorithms (Ladybug Beetle Optimization [LBO] and Electric Eel Foraging Optimization [EEFO]). After model optimization, a stacking ensemble learning technique was used to weight the models and combine the model outputs to increase the accuracy and reliability of the LSM. The weight combinations of the models were optimized using LBO and EEFO. The Root Mean Square Error (RMSE) and Area Under the Receiver Operating Characteristic Curve (AUC-ROC) parameters were used to assess the performance of these models. A landslide dataset from Kermanshah province, Iran, and 17 influencing factors were used to evaluate the proposed approach. The landslide inventory comprised 116 points, and the combined Voronoi and entropy method was applied for non-landslide point sampling. The results showed higher accuracy from the stacking ensemble technique with the EEFO and LBO algorithms, with AUC-ROC values of 94.81% and 94.84% and RMSE values of 0.3146 and 0.3142, respectively. 
The proposed approach can help managers and planners prepare accurate and reliable LSMs and, as a result, reduce the human and financial losses associated with landslide events.</p> ]]></content:encoded> <dc:title>Stacking Ensemble Technique Using Optimized Machine Learning Models with Boruta&amp;ndash;XGBoost Feature Selection for Landslide Susceptibility Mapping: A Case of Kermanshah Province, Iran</dc:title> <dc:creator>Zeynab Yousefi</dc:creator> <dc:creator>Ali Asghar Alesheikh</dc:creator> <dc:creator>Ali Jafari</dc:creator> <dc:creator>Sara Torktatari</dc:creator> <dc:creator>Mohammad Sharif</dc:creator> <dc:identifier>doi: 10.3390/info15110689</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-02</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-02</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>689</prism:startingPage> <prism:doi>10.3390/info15110689</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/689</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/688"> <title>Information, Vol. 15, Pages 688: Geospatial Analysis of the Association Between Medicaid Expansion, Minimum Wage Policies, and Alzheimer&rsquo;s Disease Dementia Prevalence in the United States</title> <link>https://www.mdpi.com/2078-2489/15/11/688</link> <description>Previous studies indicate that increased healthcare access through Medicaid expansion and alleviation of socioeconomic stressors via higher minimum wages improved health outcomes. This study investigates the spatial relationships between the Medicaid expansion, minimum wage policy, and Alzheimer&amp;rsquo;s Disease (AD) dementia prevalence across the US. We used county-level AD dementia prevalence adjusted for age, sex, race/ethnicity, and education. 
Social Vulnerability Index (SVI) data, Medicaid expansion status, and state minimum wage law status were incorporated from CDC, Kaiser Family Foundation, and US Department of Labor sources, respectively. We employed the Getis-Ord Gi* statistic to identify hotspots and cold spots of AD dementia prevalence at the county level. We compared these locations with the overall SVI scores using univariate analyses. We also assessed the proportion of hot and cold spots at the state level based on Medicaid expansion and minimum wage status using the logistic regression model. The most vulnerable SVI quartile (Q4) had the highest number of hotspots (n = 311, 64.8%), while the least vulnerable quartile (Q1) had the fewest hotspots (n = 22, 4.6%) (&amp;chi;2 = 307.41, p &amp;lt; 0.01). States that adopted Medicaid expansion had a significantly lower proportion of hotspots compared to non-adopting states (p &amp;lt; 0.05), and the non-adopting states had significantly higher odds of having hotspots than adopting states (OR = 2.58, 95% CI: 2.04&amp;ndash;3.26, p &amp;lt; 0.001). Conversely, the non-adopting states had significantly lower odds of having cold spots compared to the adopting states (OR = 0.24, 95% CI: 0.19&amp;ndash;0.32, p &amp;lt; 0.01). States with minimum wage levels at or below the federal level showed significantly higher odds of having hotspots than states with a minimum wage above the federal level (OR = 1.94, 95% CI: 1.51&amp;ndash;2.49, p &amp;lt; 0.01). Our findings suggest significant disparities in AD dementia prevalence related to socioeconomic and policy factors and lay the groundwork for future causal analyses.</description> <pubDate>2024-11-01</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 688: Geospatial Analysis of the Association Between Medicaid Expansion, Minimum Wage Policies, and Alzheimer&rsquo;s Disease Dementia Prevalence in the United States</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/688">doi: 10.3390/info15110688</a></p> <p>Authors: Abolfazl Mollalo Sara Knox Jessica Meng Andreana Benitez Leslie A. Lenert Alexander V. Alekseyenko </p> <p>Previous studies indicate that increased healthcare access through Medicaid expansion and alleviation of socioeconomic stressors via higher minimum wages improved health outcomes. This study investigates the spatial relationships between the Medicaid expansion, minimum wage policy, and Alzheimer&amp;rsquo;s Disease (AD) dementia prevalence across the US. We used county-level AD dementia prevalence adjusted for age, sex, race/ethnicity, and education. Social Vulnerability Index (SVI) data, Medicaid expansion status, and state minimum wage law status were incorporated from CDC, Kaiser Family Foundation, and US Department of Labor sources, respectively. We employed the Getis-Ord Gi* statistic to identify hotspots and cold spots of AD dementia prevalence at the county level. We compared these locations with the overall SVI scores using univariate analyses. We also assessed the proportion of hot and cold spots at the state level based on Medicaid expansion and minimum wage status using the logistic regression model. The most vulnerable SVI quartile (Q4) had the highest number of hotspots (n = 311, 64.8%), while the least vulnerable quartile (Q1) had the fewest hotspots (n = 22, 4.6%) (&amp;chi;2 = 307.41, p &amp;lt; 0.01). States that adopted Medicaid expansion had a significantly lower proportion of hotspots compared to non-adopting states (p &amp;lt; 0.05), and the non-adopting states had significantly higher odds of having hotspots than adopting states (OR = 2.58, 95% CI: 2.04&amp;ndash;3.26, p &amp;lt; 0.001). 
Conversely, the non-adopting states had significantly lower odds of having cold spots compared to the adopting states (OR = 0.24, 95% CI: 0.19&amp;ndash;0.32, p &amp;lt; 0.01). States with minimum wage levels at or below the federal level showed significantly higher odds of having hotspots than states with a minimum wage above the federal level (OR = 1.94, 95% CI: 1.51&amp;ndash;2.49, p &amp;lt; 0.01). Our findings suggest significant disparities in AD dementia prevalence related to socioeconomic and policy factors and lay the groundwork for future causal analyses.</p> ]]></content:encoded> <dc:title>Geospatial Analysis of the Association Between Medicaid Expansion, Minimum Wage Policies, and Alzheimer&amp;rsquo;s Disease Dementia Prevalence in the United States</dc:title> <dc:creator>Abolfazl Mollalo</dc:creator> <dc:creator>Sara Knox</dc:creator> <dc:creator>Jessica Meng</dc:creator> <dc:creator>Andreana Benitez</dc:creator> <dc:creator>Leslie A. Lenert</dc:creator> <dc:creator>Alexander V. Alekseyenko</dc:creator> <dc:identifier>doi: 10.3390/info15110688</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-01</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-01</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>688</prism:startingPage> <prism:doi>10.3390/info15110688</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/688</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/687"> <title>Information, Vol. 15, Pages 687: Mitigating Bias Due to Race and Gender in Machine Learning Predictions of Traffic Stop Outcomes</title> <link>https://www.mdpi.com/2078-2489/15/11/687</link> <description>Traffic stops represent a crucial point of interaction between citizens and law enforcement, with potential implications for bias and discrimination. 
This study performs a rigorously validated comparative machine learning model analysis, creating artificial intelligence (AI) technologies to predict the results of traffic stops using a dataset sourced from the Montgomery County Maryland Data Centre, focusing on variables such as driver demographics, violation types, and stop outcomes. We repeated our rigorous validation of AI for the creation of models that predict outcomes with and without race and with and without gender informing the model. Feature selection employed regularly selects for gender and race as a predictor variable. We also observed correlations between model performance and both race and gender. While these findings imply the existence of discrimination based on race and gender, our large-scale analysis (&amp;gt;600,000 samples) demonstrates the ability to produce top performing models that are gender and race agnostic, implying the potential to create technology that can help mitigate bias in traffic stops. The findings encourage the need for unbiased data and robust algorithms to address biases in law enforcement practices and enhance public trust in AI technologies deployed in this domain.</description> <pubDate>2024-11-01</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 687: Mitigating Bias Due to Race and Gender in Machine Learning Predictions of Traffic Stop Outcomes</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/687">doi: 10.3390/info15110687</a></p> <p>Authors: Kevin Saville Derek Berger Jacob Levman </p> <p>Traffic stops represent a crucial point of interaction between citizens and law enforcement, with potential implications for bias and discrimination. 
This study performs a rigorously validated comparative machine learning model analysis, creating artificial intelligence (AI) technologies to predict the results of traffic stops using a dataset sourced from the Montgomery County Maryland Data Centre, focusing on variables such as driver demographics, violation types, and stop outcomes. We repeated our rigorous validation of AI for the creation of models that predict outcomes with and without race and with and without gender informing the model. Feature selection employed regularly selects for gender and race as a predictor variable. We also observed correlations between model performance and both race and gender. While these findings imply the existence of discrimination based on race and gender, our large-scale analysis (&amp;gt;600,000 samples) demonstrates the ability to produce top performing models that are gender and race agnostic, implying the potential to create technology that can help mitigate bias in traffic stops. The findings encourage the need for unbiased data and robust algorithms to address biases in law enforcement practices and enhance public trust in AI technologies deployed in this domain.</p> ]]></content:encoded> <dc:title>Mitigating Bias Due to Race and Gender in Machine Learning Predictions of Traffic Stop Outcomes</dc:title> <dc:creator>Kevin Saville</dc:creator> <dc:creator>Derek Berger</dc:creator> <dc:creator>Jacob Levman</dc:creator> <dc:identifier>doi: 10.3390/info15110687</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-01</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-01</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>687</prism:startingPage> <prism:doi>10.3390/info15110687</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/687</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item 
rdf:about="https://www.mdpi.com/2078-2489/15/11/686"> <title>Information, Vol. 15, Pages 686: Support of Migrant Reception, Integration, and Social Inclusion by Intelligent Technologies</title> <link>https://www.mdpi.com/2078-2489/15/11/686</link> <description>Apart from being an economic struggle, migration is first of all a societal challenge; most migrants come from different cultural and social contexts, do not speak the language of the host country, and are not familiar with its societal, administrative, and labour market infrastructure. This leaves them in need of dedicated personal assistance during their reception and integration. However, due to the continuously high number of people in need of attendance, public administrations and non-governmental organizations are often overstrained by this task. The objective of the Welcome Platform is to address the most pressing needs of migrants. The Platform incorporates advanced Embodied Conversational Agent and Virtual Reality technologies to support migrants in the context of reception, integration, and social inclusion in the host country. It has been successfully evaluated in trials with migrants in three European countries in view of potentially deviating needs at the municipal, regional, and national levels, respectively: the City of Hamm in Germany, Catalonia in Spain, and Greece. The results show that intelligent technologies can be a valuable supplementary tool for reducing the workload of personnel involved in migrant reception, integration, and inclusion.</description> <pubDate>2024-11-01</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 686: Support of Migrant Reception, Integration, and Social Inclusion by Intelligent Technologies</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/686">doi: 10.3390/info15110686</a></p> <p>Authors: Leo Wanner Daniel Bowen Marta Burgos Ester Carrasco Jan Černocký Toni Codina Jevgenijs Danilins Steffi Davey Joan de Lara Eleni Dimopoulou Ekaterina Egorova Christine Gebhard Jens Grivolla Elena Jaramillo-Rojas Matthias Klusch Athanasios Mavropoulos Maria Moudatsou Artemisia Nikolaidou Dimos Ntioudis Irene Rodríguez Mirela Rosgova Yash Shekhawat Alexander Shvets Oleksandr Sobko Grigoris Tzionis Stefanos Vrochidis </p> <p>Apart from being an economic struggle, migration is first of all a societal challenge; most migrants come from different cultural and social contexts, do not speak the language of the host country, and are not familiar with its societal, administrative, and labour market infrastructure. This leaves them in need of dedicated personal assistance during their reception and integration. However, due to the continuously high number of people in need of attendance, public administrations and non-governmental organizations are often overstrained by this task. The objective of the Welcome Platform is to address the most pressing needs of migrants. The Platform incorporates advanced Embodied Conversational Agent and Virtual Reality technologies to support migrants in the context of reception, integration, and social inclusion in the host country. It has been successfully evaluated in trials with migrants in three European countries in view of potentially deviating needs at the municipal, regional, and national levels, respectively: the City of Hamm in Germany, Catalonia in Spain, and Greece.
The results show that intelligent technologies can be a valuable supplementary tool for reducing the workload of personnel involved in migrant reception, integration, and inclusion.</p> ]]></content:encoded> <dc:title>Support of Migrant Reception, Integration, and Social Inclusion by Intelligent Technologies</dc:title> <dc:creator>Leo Wanner</dc:creator> <dc:creator>Daniel Bowen</dc:creator> <dc:creator>Marta Burgos</dc:creator> <dc:creator>Ester Carrasco</dc:creator> <dc:creator>Jan Černocký</dc:creator> <dc:creator>Toni Codina</dc:creator> <dc:creator>Jevgenijs Danilins</dc:creator> <dc:creator>Steffi Davey</dc:creator> <dc:creator>Joan de Lara</dc:creator> <dc:creator>Eleni Dimopoulou</dc:creator> <dc:creator>Ekaterina Egorova</dc:creator> <dc:creator>Christine Gebhard</dc:creator> <dc:creator>Jens Grivolla</dc:creator> <dc:creator>Elena Jaramillo-Rojas</dc:creator> <dc:creator>Matthias Klusch</dc:creator> <dc:creator>Athanasios Mavropoulos</dc:creator> <dc:creator>Maria Moudatsou</dc:creator> <dc:creator>Artemisia Nikolaidou</dc:creator> <dc:creator>Dimos Ntioudis</dc:creator> <dc:creator>Irene Rodríguez</dc:creator> <dc:creator>Mirela Rosgova</dc:creator> <dc:creator>Yash Shekhawat</dc:creator> <dc:creator>Alexander Shvets</dc:creator> <dc:creator>Oleksandr Sobko</dc:creator> <dc:creator>Grigoris Tzionis</dc:creator> <dc:creator>Stefanos Vrochidis</dc:creator> <dc:identifier>doi: 10.3390/info15110686</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-01</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-01</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>686</prism:startingPage> <prism:doi>10.3390/info15110686</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/686</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/685">
<title>Information, Vol. 15, Pages 685: Elegante: A Machine Learning-Based Threads Configuration Tool for SpMV Computations on Shared Memory Architecture</title> <link>https://www.mdpi.com/2078-2489/15/11/685</link> <description>The sparse matrix&amp;ndash;vector product (SpMV) is a fundamental computational kernel utilized in a diverse range of scientific and engineering applications. It is commonly used to solve linear and partial differential equations. The parallel computation of the SpMV product is a challenging task. Existing solutions often employ a fixed number of threads assignment to rows based on empirical formulas, leading to sub-optimal configurations and significant performance losses. Elegante, our proposed machine learning-powered tool, utilizes a data-driven approach to identify the optimal thread configuration for SpMV computations within a shared memory architecture. It accomplishes this by predicting the best thread configuration based on the unique sparsity pattern of each sparse matrix. Our approach involves training and testing using various base and ensemble machine learning algorithms such as decision tree, random forest, gradient boosting, logistic regression, and support vector machine. We rigorously experimented with a dataset of nearly 1000+ real-world matrices. These matrices originated from 46 distinct application domains, spanning fields like robotics, power networks, 2D/3D meshing, and computational fluid dynamics. Our proposed methodology achieved 62% of the highest achievable performance and is 7.33 times faster, demonstrating a significant disparity from the default OpenMP configuration policy and traditional practice methods of manually or randomly selecting the number of threads. 
This work is the first attempt where the structure of the matrix is used to predict the optimal thread configuration for the optimization of parallel SpMV computation in a shared memory environment.</description> <pubDate>2024-11-01</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 685: Elegante: A Machine Learning-Based Threads Configuration Tool for SpMV Computations on Shared Memory Architecture</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/685">doi: 10.3390/info15110685</a></p> <p>Authors: Muhammad Ahmad Usman Sardar Ildar Batyrshin Muhammad Hasnain Khan Sajid Grigori Sidorov </p> <p>The sparse matrix&amp;ndash;vector product (SpMV) is a fundamental computational kernel utilized in a diverse range of scientific and engineering applications. It is commonly used to solve linear and partial differential equations. The parallel computation of the SpMV product is a challenging task. Existing solutions often employ a fixed number of threads assignment to rows based on empirical formulas, leading to sub-optimal configurations and significant performance losses. Elegante, our proposed machine learning-powered tool, utilizes a data-driven approach to identify the optimal thread configuration for SpMV computations within a shared memory architecture. It accomplishes this by predicting the best thread configuration based on the unique sparsity pattern of each sparse matrix. Our approach involves training and testing using various base and ensemble machine learning algorithms such as decision tree, random forest, gradient boosting, logistic regression, and support vector machine. We rigorously experimented with a dataset of nearly 1000+ real-world matrices. These matrices originated from 46 distinct application domains, spanning fields like robotics, power networks, 2D/3D meshing, and computational fluid dynamics. 
Our proposed methodology achieved 62% of the highest achievable performance and is 7.33 times faster, demonstrating a significant disparity from the default OpenMP configuration policy and traditional practice methods of manually or randomly selecting the number of threads. This work is the first attempt where the structure of the matrix is used to predict the optimal thread configuration for the optimization of parallel SpMV computation in a shared memory environment.</p> ]]></content:encoded> <dc:title>Elegante: A Machine Learning-Based Threads Configuration Tool for SpMV Computations on Shared Memory Architecture</dc:title> <dc:creator>Muhammad Ahmad</dc:creator> <dc:creator>Usman Sardar</dc:creator> <dc:creator>Ildar Batyrshin</dc:creator> <dc:creator>Muhammad Hasnain</dc:creator> <dc:creator>Khan Sajid</dc:creator> <dc:creator>Grigori Sidorov</dc:creator> <dc:identifier>doi: 10.3390/info15110685</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-01</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-01</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>685</prism:startingPage> <prism:doi>10.3390/info15110685</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/685</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/684"> <title>Information, Vol. 15, Pages 684: Implementation of a Reduced Decoding Algorithm Complexity for Quasi-Cyclic Split-Row Threshold Low-Density Parity-Check Decoders</title> <link>https://www.mdpi.com/2078-2489/15/11/684</link> <description>We propose two decoding algorithms for quasi-cyclic LDPC codes (QC-LDPC) and implement the more efficient one in this paper. These algorithms depend on the split row for the layered decoding method applied to the Min-Sum (MS) algorithm. 
We designate the first algorithm &amp;ldquo;Split-Row Layered Min-Sum&amp;rdquo; (SRLMS), and the second algorithm &amp;ldquo;Split-Row Threshold Layered Min-Sum&amp;rdquo; (SRTLMS). A threshold message passes from one partition to another in SRTLMS, minimizing the gap from the MS and achieving a binary error rate of 3 &amp;times; 10&amp;minus;5 with Imax = 4 as the maximum number of iterations, resulting in a decrease of 0.25 dB. The simulation&amp;rsquo;s findings indicate that the SRTLMS is the most efficient variant decoding algorithm for LDPC codes, thanks to its compromise between performance and complexity. This paper presents the two invented algorithms and a comprehensive study of the co-design and implementation of the SRTLMS algorithm. We executed the implementation on a Xilinx Kintex-7 XC7K160 FPGA, achieving a maximum operating frequency of 101 MHz and a throughput of 606 Mbps.</description> <pubDate>2024-11-01</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 684: Implementation of a Reduced Decoding Algorithm Complexity for Quasi-Cyclic Split-Row Threshold Low-Density Parity-Check Decoders</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/684">doi: 10.3390/info15110684</a></p> <p>Authors: Bilal Mejmaa Chakir Aqil Ismail Akharraz Abdelaziz Ahaitouf </p> <p>We propose two decoding algorithms for quasi-cyclic LDPC codes (QC-LDPC) and implement the more efficient one in this paper. These algorithms depend on the split row for the layered decoding method applied to the Min-Sum (MS) algorithm. We designate the first algorithm &amp;ldquo;Split-Row Layered Min-Sum&amp;rdquo; (SRLMS), and the second algorithm &amp;ldquo;Split-Row Threshold Layered Min-Sum&amp;rdquo; (SRTLMS). 
A threshold message passes from one partition to another in SRTLMS, minimizing the gap from the MS and achieving a binary error rate of 3 &amp;times; 10&amp;minus;5 with Imax = 4 as the maximum number of iterations, resulting in a decrease of 0.25 dB. The simulation&amp;rsquo;s findings indicate that the SRTLMS is the most efficient variant decoding algorithm for LDPC codes, thanks to its compromise between performance and complexity. This paper presents the two invented algorithms and a comprehensive study of the co-design and implementation of the SRTLMS algorithm. We executed the implementation on a Xilinx Kintex-7 XC7K160 FPGA, achieving a maximum operating frequency of 101 MHz and a throughput of 606 Mbps.</p> ]]></content:encoded> <dc:title>Implementation of a Reduced Decoding Algorithm Complexity for Quasi-Cyclic Split-Row Threshold Low-Density Parity-Check Decoders</dc:title> <dc:creator>Bilal Mejmaa</dc:creator> <dc:creator>Chakir Aqil</dc:creator> <dc:creator>Ismail Akharraz</dc:creator> <dc:creator>Abdelaziz Ahaitouf</dc:creator> <dc:identifier>doi: 10.3390/info15110684</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-01</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-01</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>684</prism:startingPage> <prism:doi>10.3390/info15110684</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/684</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/683"> <title>Information, Vol. 
15, Pages 683: Improving Search Query Accuracy for Specialized Websites Through Intelligent Text Correction and Reconstruction Models</title> <link>https://www.mdpi.com/2078-2489/15/11/683</link> <description>In the digital era, the need for precise and efficient search operations is paramount as users increasingly rely on online resources to access specific information. However, search accuracy is often hindered by errors in user queries, such as incomplete or degraded input. Errors in search queries can reduce both the precision and speed of search results, making error correction a key factor in enhancing the user experience. This paper addresses the challenge of improving search performance through query error correction. We propose a novel methodology and architecture aimed at optimizing search results across thematic websites, such as those for universities, hospitals, or tourism agencies. The proposed solution leverages an intelligent model based on Gated Recurrent Units (GRUs) and Bahdanau Attention mechanisms to reconstruct erroneous or incomplete text in search queries. To validate our approach, we embedded the model in a prototype website consolidating data from multiple universities, demonstrating significant improvements in search accuracy and efficiency.</description> <pubDate>2024-11-01</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 683: Improving Search Query Accuracy for Specialized Websites Through Intelligent Text Correction and Reconstruction Models</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/683">doi: 10.3390/info15110683</a></p> <p>Authors: Dana Simian Marin-Eusebiu Șerban </p> <p>In the digital era, the need for precise and efficient search operations is paramount as users increasingly rely on online resources to access specific information. However, search accuracy is often hindered by errors in user queries, such as incomplete or degraded input.
Errors in search queries can reduce both the precision and speed of search results, making error correction a key factor in enhancing the user experience. This paper addresses the challenge of improving search performance through query error correction. We propose a novel methodology and architecture aimed at optimizing search results across thematic websites, such as those for universities, hospitals, or tourism agencies. The proposed solution leverages an intelligent model based on Gated Recurrent Units (GRUs) and Bahdanau Attention mechanisms to reconstruct erroneous or incomplete text in search queries. To validate our approach, we embedded the model in a prototype website consolidating data from multiple universities, demonstrating significant improvements in search accuracy and efficiency.</p> ]]></content:encoded> <dc:title>Improving Search Query Accuracy for Specialized Websites Through Intelligent Text Correction and Reconstruction Models</dc:title> <dc:creator>Dana Simian</dc:creator> <dc:creator>Marin-Eusebiu Șerban</dc:creator> <dc:identifier>doi: 10.3390/info15110683</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-01</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-01</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>683</prism:startingPage> <prism:doi>10.3390/info15110683</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/683</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/682"> <title>Information, Vol.
15, Pages 682: Geopolitical Ramifications of Cybersecurity Threats: State Responses and International Cooperations in the Digital Warfare Era</title> <link>https://www.mdpi.com/2078-2489/15/11/682</link> <description>As the digital environment progresses, the complexities of cyber threats also advance, encompassing both hostile cyberattacks and sophisticated cyber espionage. In the face of these difficulties, cooperative endeavours between state and non-state actors have attracted considerable interest as crucial elements in improving global cyber resilience. This study examines cybersecurity governance&amp;rsquo;s evolving dynamics, specifically exploring non-state actors&amp;rsquo; roles and their effects on global security. This highlights the increasing dangers presented by supply chain attacks, advanced persistent threats, ransomware, and vulnerabilities on the Internet of Things. Furthermore, it explores how non-state actors, such as terrorist organisations and armed groups, increasingly utilise cyberspace for strategic objectives. This issue can pose a challenge to conventional state-focused approaches to security management. Moreover, the research examines the crucial influence of informal governance processes on forming international cybersecurity regulations. The study emphasises the need for increased cooperation between governmental and non-governmental entities to create robust and flexible cybersecurity measures. This statement urges policymakers, security experts, and researchers to thoroughly examine the complex relationship between geopolitics, informal governance systems, and growing cyber threats to strengthen global digital resilience.</description> <pubDate>2024-11-01</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 682: Geopolitical Ramifications of Cybersecurity Threats: State Responses and International Cooperations in the Digital Warfare Era</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/682">doi: 10.3390/info15110682</a></p> <p>Authors: Aisha Adeyeri Hossein Abroshan </p> <p>As the digital environment progresses, the complexities of cyber threats also advance, encompassing both hostile cyberattacks and sophisticated cyber espionage. In the face of these difficulties, cooperative endeavours between state and non-state actors have attracted considerable interest as crucial elements in improving global cyber resilience. This study examines cybersecurity governance&amp;rsquo;s evolving dynamics, specifically exploring non-state actors&amp;rsquo; roles and their effects on global security. This highlights the increasing dangers presented by supply chain attacks, advanced persistent threats, ransomware, and vulnerabilities on the Internet of Things. Furthermore, it explores how non-state actors, such as terrorist organisations and armed groups, increasingly utilise cyberspace for strategic objectives. This issue can pose a challenge to conventional state-focused approaches to security management. Moreover, the research examines the crucial influence of informal governance processes on forming international cybersecurity regulations. The study emphasises the need for increased cooperation between governmental and non-governmental entities to create robust and flexible cybersecurity measures. 
This statement urges policymakers, security experts, and researchers to thoroughly examine the complex relationship between geopolitics, informal governance systems, and growing cyber threats to strengthen global digital resilience.</p> ]]></content:encoded> <dc:title>Geopolitical Ramifications of Cybersecurity Threats: State Responses and International Cooperations in the Digital Warfare Era</dc:title> <dc:creator>Aisha Adeyeri</dc:creator> <dc:creator>Hossein Abroshan</dc:creator> <dc:identifier>doi: 10.3390/info15110682</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-01</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-01</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>682</prism:startingPage> <prism:doi>10.3390/info15110682</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/682</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/681"> <title>Information, Vol. 15, Pages 681: Identifying Learners&rsquo; Confusion in a MOOC Forum Across Domains Using Explainable Deep Transfer Learning</title> <link>https://www.mdpi.com/2078-2489/15/11/681</link> <description>Massive Open Online Courses (MOOCs) offer highly specialized online courses and have attracted nearly 10 million learners worldwide to participate in various educational programs. These platforms provide discussion forums that allow learners to engage with both their peers and instructors, facilitating idea exchange and seeking assistance, respectively. However, due to the substantial participant-to-instructor ratio, certain posts may go unanswered. Addressing learners&amp;rsquo; confusion is crucial. This emotional state, often experienced during the learning journey, necessitates prompt support to prevent potential dropouts. 
This paper proposes the application of a deep transfer learning method to automate the classification of online discussion posts based on indicators of confusion utilizing the Stanford MOOCPost dataset. The approach involves creating an explainable and adaptable deep learning model through network-based transfer learning across multiple educational domains. This model outperforms baseline methods, achieving an average accuracy of 91%. Additionally, employing data augmentation techniques enhances the model&amp;rsquo;s generalizability, resulting in an 11% improvement in the F1 score. To mitigate the inherent opacity of the implemented models, Local Interpretable Model-Agnostic Explanation and Shapley Additive Explanation techniques are integrated. These explanations assess the reliability of features and provide supplementary insights into the confusion detection. By pinpointing confused posts, this work assists instructors in delivering timely responses, resolving learner confusion, providing accurate visualization of key contributing words, and reducing the dropout rate. This proactive approach ensures a smoother continuation of the learning process, consequently enhancing learner satisfaction with the educational experience.</description> <pubDate>2024-11-01</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 681: Identifying Learners&rsquo; Confusion in a MOOC Forum Across Domains Using Explainable Deep Transfer Learning</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/681">doi: 10.3390/info15110681</a></p> <p>Authors: Rahaf Alsuhaimi Omaima Almatrafi </p> <p>Massive Open Online Courses (MOOCs) offer highly specialized online courses and have attracted nearly 10 million learners worldwide to participate in various educational programs. These platforms provide discussion forums that allow learners to engage with both their peers and instructors, facilitating idea exchange and seeking assistance, respectively. 
However, due to the substantial participant-to-instructor ratio, certain posts may go unanswered. Addressing learners&amp;rsquo; confusion is crucial. This emotional state, often experienced during the learning journey, necessitates prompt support to prevent potential dropouts. This paper proposes the application of a deep transfer learning method to automate the classification of online discussion posts based on indicators of confusion utilizing the Stanford MOOCPost dataset. The approach involves creating an explainable and adaptable deep learning model through network-based transfer learning across multiple educational domains. This model outperforms baseline methods, achieving an average accuracy of 91%. Additionally, employing data augmentation techniques enhances the model&amp;rsquo;s generalizability, resulting in an 11% improvement in the F1 score. To mitigate the inherent opacity of the implemented models, Local Interpretable Model-Agnostic Explanation and Shapley Additive Explanation techniques are integrated. These explanations assess the reliability of features and provide supplementary insights into the confusion detection. By pinpointing confused posts, this work assists instructors in delivering timely responses, resolving learner confusion, providing accurate visualization of key contributing words, and reducing the dropout rate. 
This proactive approach ensures a smoother continuation of the learning process, consequently enhancing learner satisfaction with the educational experience.</p> ]]></content:encoded> <dc:title>Identifying Learners&amp;rsquo; Confusion in a MOOC Forum Across Domains Using Explainable Deep Transfer Learning</dc:title> <dc:creator>Rahaf Alsuhaimi</dc:creator> <dc:creator>Omaima Almatrafi</dc:creator> <dc:identifier>doi: 10.3390/info15110681</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-11-01</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-11-01</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>681</prism:startingPage> <prism:doi>10.3390/info15110681</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/681</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/680"> <title>Information, Vol. 15, Pages 680: Leveraging Agent-Based Modeling and IoT for Enhanced E-Commerce Strategies</title> <link>https://www.mdpi.com/2078-2489/15/11/680</link> <description>The increasing demand for consumers to engage in e-commerce &amp;ldquo;anytime, anywhere&amp;rdquo; necessitates more advanced and integrated solutions. This paper presents a novel approach for integrating e-commerce platforms with the Internet of Things (IoT) through the use of agent-based models. The key objective is to create a multi-agent system that optimizes interactions between IoT devices and e-commerce systems, thereby improving operational efficiency, adaptability, and user experience in online transactions. In this system, independent agents act as intermediaries, facilitating communication and enabling decentralized decision making. 
This architecture allows the system to adjust dynamically to environmental changes while managing complex tasks, such as real-time inventory monitoring and personalized product recommendations. The paper provides a comprehensive overview of the system&amp;rsquo;s framework, design principles, and algorithms, highlighting the robustness and flexibility of the proposed structure. The effectiveness of this model is validated through simulations and case studies, demonstrating its capacity to handle large data volumes, ensure security and privacy, and maintain seamless interoperability among a variety of IoT devices and e-commerce platforms. The findings suggest that this system offers a viable solution to the challenges of integrating IoT into e-commerce, contributing to both academic research and practical applications in the field.</description> <pubDate>2024-10-31</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 680: Leveraging Agent-Based Modeling and IoT for Enhanced E-Commerce Strategies</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/680">doi: 10.3390/info15110680</a></p> <p>Authors: Mohamed Shili Sajid Anwar </p> <p>The increasing demand for consumers to engage in e-commerce &amp;ldquo;anytime, anywhere&amp;rdquo; necessitates more advanced and integrated solutions. This paper presents a novel approach for integrating e-commerce platforms with the Internet of Things (IoT) through the use of agent-based models. The key objective is to create a multi-agent system that optimizes interactions between IoT devices and e-commerce systems, thereby improving operational efficiency, adaptability, and user experience in online transactions. In this system, independent agents act as intermediaries, facilitating communication and enabling decentralized decision making. 
This architecture allows the system to adjust dynamically to environmental changes while managing complex tasks, such as real-time inventory monitoring and personalized product recommendations. The paper provides a comprehensive overview of the system&amp;rsquo;s framework, design principles, and algorithms, highlighting the robustness and flexibility of the proposed structure. The effectiveness of this model is validated through simulations and case studies, demonstrating its capacity to handle large data volumes, ensure security and privacy, and maintain seamless interoperability among a variety of IoT devices and e-commerce platforms. The findings suggest that this system offers a viable solution to the challenges of integrating IoT into e-commerce, contributing to both academic research and practical applications in the field.</p> ]]></content:encoded> <dc:title>Leveraging Agent-Based Modeling and IoT for Enhanced E-Commerce Strategies</dc:title> <dc:creator>Mohamed Shili</dc:creator> <dc:creator>Sajid Anwar</dc:creator> <dc:identifier>doi: 10.3390/info15110680</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-31</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-31</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>680</prism:startingPage> <prism:doi>10.3390/info15110680</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/680</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/679"> <title>Information, Vol. 15, Pages 679: The Personality of the Intelligent Cockpit? 
Exploring the Personality Traits of In-Vehicle LLMs with Psychometrics</title> <link>https://www.mdpi.com/2078-2489/15/11/679</link> <description>The development of large language models (LLMs) has promoted a transformation of human&amp;ndash;computer interaction (HCI) models and has attracted the attention of scholars to the evaluation of personality traits of LLMs. As an important interface for the HCI and human&amp;ndash;machine interface (HMI) in the future, the intelligent cockpit has become one of LLM&amp;rsquo;s most important application scenarios. When in-vehicle intelligent systems based on in-vehicle LLMs begin to become human assistants or even partners, it has become important to study the &amp;ldquo;personality&amp;rdquo; of in-vehicle LLMs. Referring to the relevant research on personality traits of LLMs, this study selected the psychological scales Big Five Inventory-2 (BFI-2), Myers&amp;ndash;Briggs Type Indicator (MBTI), and Short Dark Triad (SD-3) to establish a personality traits evaluation framework for in-vehicle LLMs. Then, we used this framework to evaluate the personality of three in-vehicle LLMs. The results showed that psychological scales can be used to measure the personality traits of in-vehicle LLMs. In-vehicle LLMs showed commonalities in extroversion, agreeableness, conscientiousness, and action patterns, yet differences in openness, perception, decision-making, information acquisition methods, and psychopathy. According to the results, we established anthropomorphic personality personas of different in-vehicle LLMs. This study represents a novel attempt to evaluate the personalities of in-vehicle LLMs. The experimental results deepen our understanding of in-vehicle LLMs and contribute to the further exploration of personalized fine-tuning of in-vehicle LLMs and the improvement in the user experience of the automobile in the future.</description> <pubDate>2024-10-31</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 679: The Personality of the Intelligent Cockpit? Exploring the Personality Traits of In-Vehicle LLMs with Psychometrics</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/679">doi: 10.3390/info15110679</a></p> <p>Authors: Qianli Lin Zhipeng Hu Jun Ma </p> <p>The development of large language models (LLMs) has promoted a transformation of human&amp;ndash;computer interaction (HCI) models and has attracted the attention of scholars to the evaluation of personality traits of LLMs. As an important interface for the HCI and human&amp;ndash;machine interface (HMI) in the future, the intelligent cockpit has become one of LLM&amp;rsquo;s most important application scenarios. When in-vehicle intelligent systems based on in-vehicle LLMs begin to become human assistants or even partners, it has become important to study the &amp;ldquo;personality&amp;rdquo; of in-vehicle LLMs. Referring to the relevant research on personality traits of LLMs, this study selected the psychological scales Big Five Inventory-2 (BFI-2), Myers&amp;ndash;Briggs Type Indicator (MBTI), and Short Dark Triad (SD-3) to establish a personality traits evaluation framework for in-vehicle LLMs. Then, we used this framework to evaluate the personality of three in-vehicle LLMs. The results showed that psychological scales can be used to measure the personality traits of in-vehicle LLMs. In-vehicle LLMs showed commonalities in extroversion, agreeableness, conscientiousness, and action patterns, yet differences in openness, perception, decision-making, information acquisition methods, and psychopathy. According to the results, we established anthropomorphic personality personas of different in-vehicle LLMs. This study represents a novel attempt to evaluate the personalities of in-vehicle LLMs. 
The experimental results deepen our understanding of in-vehicle LLMs and contribute to the further exploration of personalized fine-tuning of in-vehicle LLMs and the improvement in the user experience of the automobile in the future.</p> ]]></content:encoded> <dc:title>The Personality of the Intelligent Cockpit? Exploring the Personality Traits of In-Vehicle LLMs with Psychometrics</dc:title> <dc:creator>Qianli Lin</dc:creator> <dc:creator>Zhipeng Hu</dc:creator> <dc:creator>Jun Ma</dc:creator> <dc:identifier>doi: 10.3390/info15110679</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-31</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-31</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>679</prism:startingPage> <prism:doi>10.3390/info15110679</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/679</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/678"> <title>Information, Vol. 15, Pages 678: Artificial Intelligence (AI) Integration in Urban Decision-Making Processes: Convergence and Divergence with the Multi-Criteria Analysis (MCA)</title> <link>https://www.mdpi.com/2078-2489/15/11/678</link> <description>The dynamics underpinning the urban landscape change are primarily driven by social, economic, and environmental issues. Owing to the population&amp;rsquo;s fluctuating needs, a new and dual perspective of urban space emerges. The Artificial Intelligence (AI) of a territory, or the system of technical diligence associated with the anthropocentric world, makes sense in the context of this temporal mismatch between territorial processes and utilitarian apparatus. 
This creates cerebral connections between several concurrent decision-making systems, leading to numerous perspectives of the same urban environment, often filtered by the people whose interests direct the information flow till the transformability. In contrast to the conventional methodologies of decision analysis, which are employed to facilitate convenient judgments between alternative options, innovative Artificial Intelligence tools are gaining traction as a means of more effectively evaluating and selecting fast-track solutions. The study&amp;rsquo;s goal is to investigate the cross-functional relationships between Artificial Intelligence (AI) and current decision-making support systems, which are increasingly being used to interpret urban growth and development from a multi-dimensional perspective, such as a multi-criteria one. Individuals in charge of administering and governing a territory will gain from artificial intelligence techniques because they will be able to test resilience and responsibility in decision-making circumstances while also responding fast and spontaneously to community requirements. The study evaluates current grading techniques and recommends areas for future upgrades via the lens of the potentials afforded by AI technology to the establishment of digitization pathways for technological advancements in the urban valuation.</description> <pubDate>2024-10-31</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 678: Artificial Intelligence (AI) Integration in Urban Decision-Making Processes: Convergence and Divergence with the Multi-Criteria Analysis (MCA)</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/678">doi: 10.3390/info15110678</a></p> <p>Authors: Maria Rosaria Guarini Francesco Sica Alejandro Segura </p> <p>The dynamics underpinning the urban landscape change are primarily driven by social, economic, and environmental issues. 
Owing to the population&amp;rsquo;s fluctuating needs, a new and dual perspective of urban space emerges. The Artificial Intelligence (AI) of a territory, or the system of technical diligence associated with the anthropocentric world, makes sense in the context of this temporal mismatch between territorial processes and utilitarian apparatus. This creates cerebral connections between several concurrent decision-making systems, leading to numerous perspectives of the same urban environment, often filtered by the people whose interests direct the information flow till the transformability. In contrast to the conventional methodologies of decision analysis, which are employed to facilitate convenient judgments between alternative options, innovative Artificial Intelligence tools are gaining traction as a means of more effectively evaluating and selecting fast-track solutions. The study&amp;rsquo;s goal is to investigate the cross-functional relationships between Artificial Intelligence (AI) and current decision-making support systems, which are increasingly being used to interpret urban growth and development from a multi-dimensional perspective, such as a multi-criteria one. Individuals in charge of administering and governing a territory will gain from artificial intelligence techniques because they will be able to test resilience and responsibility in decision-making circumstances while also responding fast and spontaneously to community requirements. 
The study evaluates current grading techniques and recommends areas for future upgrades via the lens of the potentials afforded by AI technology to the establishment of digitization pathways for technological advancements in the urban valuation.</p> ]]></content:encoded> <dc:title>Artificial Intelligence (AI) Integration in Urban Decision-Making Processes: Convergence and Divergence with the Multi-Criteria Analysis (MCA)</dc:title> <dc:creator>Maria Rosaria Guarini</dc:creator> <dc:creator>Francesco Sica</dc:creator> <dc:creator>Alejandro Segura</dc:creator> <dc:identifier>doi: 10.3390/info15110678</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-31</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-31</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>678</prism:startingPage> <prism:doi>10.3390/info15110678</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/678</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/677"> <title>Information, Vol. 15, Pages 677: Emotion-Recognition System for Smart Environments Using Acoustic Information (ERSSE)</title> <link>https://www.mdpi.com/2078-2489/15/11/677</link> <description>Acoustic management is very important for detecting possible events in the context of a smart environment (SE). In previous works, we proposed a reflective middleware for acoustic management (ReM-AM) and its autonomic cycles of data analysis tasks, along with its ontology-driven architecture. In this work, we aim to develop an emotion-recognition system for ReM-AM that uses sound events, rather than speech, as its main focus. 
The system is based on a sound pattern for emotion recognition and the autonomic cycle of intelligent sound analysis (ISA), defined by three tasks: variable extraction, sound data analysis, and emotion recommendation. We include a case study to test our emotion-recognition system in a simulation of a smart movie theater, with different situations taking place. The implementation and verification of the tasks show a promising performance in the case study, with 80% accuracy in sound recognition, and its general behavior shows that it can contribute to improving the well-being of the people present in the environment.</description> <pubDate>2024-10-30</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 677: Emotion-Recognition System for Smart Environments Using Acoustic Information (ERSSE)</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/677">doi: 10.3390/info15110677</a></p> <p>Authors: Gabriela Santiago Jose Aguilar Rodrigo García </p> <p>Acoustic management is very important for detecting possible events in the context of a smart environment (SE). In previous works, we proposed a reflective middleware for acoustic management (ReM-AM) and its autonomic cycles of data analysis tasks, along with its ontology-driven architecture. In this work, we aim to develop an emotion-recognition system for ReM-AM that uses sound events, rather than speech, as its main focus. The system is based on a sound pattern for emotion recognition and the autonomic cycle of intelligent sound analysis (ISA), defined by three tasks: variable extraction, sound data analysis, and emotion recommendation. We include a case study to test our emotion-recognition system in a simulation of a smart movie theater, with different situations taking place. 
The implementation and verification of the tasks show a promising performance in the case study, with 80% accuracy in sound recognition, and its general behavior shows that it can contribute to improving the well-being of the people present in the environment.</p> ]]></content:encoded> <dc:title>Emotion-Recognition System for Smart Environments Using Acoustic Information (ERSSE)</dc:title> <dc:creator>Gabriela Santiago</dc:creator> <dc:creator>Jose Aguilar</dc:creator> <dc:creator>Rodrigo García</dc:creator> <dc:identifier>doi: 10.3390/info15110677</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-30</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-30</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>677</prism:startingPage> <prism:doi>10.3390/info15110677</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/677</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/676"> <title>Information, Vol. 15, Pages 676: Generative AI and Higher Education: Trends, Challenges, and Future Directions from a Systematic Literature Review</title> <link>https://www.mdpi.com/2078-2489/15/11/676</link> <description>(1) Background: The development of generative artificial intelligence (GAI) is transforming higher education. This systematic literature review synthesizes recent empirical studies on the use of GAI, focusing on its impact on teaching, learning, and institutional practices. (2) Methods: Following PRISMA guidelines, a comprehensive search strategy was employed to locate scientific articles on GAI in higher education published by Scopus and Web of Science between January 2023 and January 2024. (3) Results: The search identified 102 articles, with 37 meeting the inclusion criteria. 
These studies were grouped into three themes: the application of GAI technologies, stakeholder acceptance and perceptions, and specific use situations. (4) Discussion: Key findings include GAI&amp;rsquo;s versatility and potential use, student acceptance, and educational enhancement. However, challenges such as assessment practices, institutional strategies, and risks to academic integrity were also noted. (5) Conclusions: The findings help identify potential directions for future research, including assessment integrity and pedagogical strategies, ethical considerations and policy development, the impact on teaching and learning processes, the perceptions of students and instructors, technological advancements, and the preparation of future skills and workforce readiness. The study has certain limitations, particularly due to the short time frame and the search criteria, which might have varied if conducted by different researchers.</description> <pubDate>2024-10-28</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 676: Generative AI and Higher Education: Trends, Challenges, and Future Directions from a Systematic Literature Review</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/676">doi: 10.3390/info15110676</a></p> <p>Authors: João Batista Anabela Mesquita Gonçalo Carnaz </p> <p>(1) Background: The development of generative artificial intelligence (GAI) is transforming higher education. This systematic literature review synthesizes recent empirical studies on the use of GAI, focusing on its impact on teaching, learning, and institutional practices. (2) Methods: Following PRISMA guidelines, a comprehensive search strategy was employed to locate scientific articles on GAI in higher education published by Scopus and Web of Science between January 2023 and January 2024. (3) Results: The search identified 102 articles, with 37 meeting the inclusion criteria. 
These studies were grouped into three themes: the application of GAI technologies, stakeholder acceptance and perceptions, and specific use situations. (4) Discussion: Key findings include GAI&amp;rsquo;s versatility and potential use, student acceptance, and educational enhancement. However, challenges such as assessment practices, institutional strategies, and risks to academic integrity were also noted. (5) Conclusions: The findings help identify potential directions for future research, including assessment integrity and pedagogical strategies, ethical considerations and policy development, the impact on teaching and learning processes, the perceptions of students and instructors, technological advancements, and the preparation of future skills and workforce readiness. The study has certain limitations, particularly due to the short time frame and the search criteria, which might have varied if conducted by different researchers.</p> ]]></content:encoded> <dc:title>Generative AI and Higher Education: Trends, Challenges, and Future Directions from a Systematic Literature Review</dc:title> <dc:creator>João Batista</dc:creator> <dc:creator>Anabela Mesquita</dc:creator> <dc:creator>Gonçalo Carnaz</dc:creator> <dc:identifier>doi: 10.3390/info15110676</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-28</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-28</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>676</prism:startingPage> <prism:doi>10.3390/info15110676</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/676</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/675"> <title>Information, Vol. 
15, Pages 675: Audio-Driven Facial Animation with Deep Learning: A Survey</title> <link>https://www.mdpi.com/2078-2489/15/11/675</link> <description>Audio-driven facial animation is a rapidly evolving field that aims to generate realistic facial expressions and lip movements synchronized with a given audio input. This survey provides a comprehensive review of deep learning techniques applied to audio-driven facial animation, with a focus on both audio-driven facial image animation and audio-driven facial mesh animation. These approaches employ deep learning to map audio inputs directly onto 3D facial meshes or 2D images, enabling the creation of highly realistic and synchronized animations. This survey also explores evaluation metrics, available datasets, and the challenges that remain, such as disentangling lip synchronization and emotions, generalization across speakers, and dataset limitations. Lastly, we discuss future directions, including multi-modal integration, personalized models, and facial attribute modification in animations, all of which are critical for the continued development and application of this technology.</description> <pubDate>2024-10-28</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 675: Audio-Driven Facial Animation with Deep Learning: A Survey</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/675">doi: 10.3390/info15110675</a></p> <p>Authors: Diqiong Jiang Jian Chang Lihua You Shaojun Bian Robert Kosk Greg Maguire </p> <p>Audio-driven facial animation is a rapidly evolving field that aims to generate realistic facial expressions and lip movements synchronized with a given audio input. This survey provides a comprehensive review of deep learning techniques applied to audio-driven facial animation, with a focus on both audio-driven facial image animation and audio-driven facial mesh animation. 
These approaches employ deep learning to map audio inputs directly onto 3D facial meshes or 2D images, enabling the creation of highly realistic and synchronized animations. This survey also explores evaluation metrics, available datasets, and the challenges that remain, such as disentangling lip synchronization and emotions, generalization across speakers, and dataset limitations. Lastly, we discuss future directions, including multi-modal integration, personalized models, and facial attribute modification in animations, all of which are critical for the continued development and application of this technology.</p> ]]></content:encoded> <dc:title>Audio-Driven Facial Animation with Deep Learning: A Survey</dc:title> <dc:creator>Diqiong Jiang</dc:creator> <dc:creator>Jian Chang</dc:creator> <dc:creator>Lihua You</dc:creator> <dc:creator>Shaojun Bian</dc:creator> <dc:creator>Robert Kosk</dc:creator> <dc:creator>Greg Maguire</dc:creator> <dc:identifier>doi: 10.3390/info15110675</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-28</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-28</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>675</prism:startingPage> <prism:doi>10.3390/info15110675</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/675</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/674"> <title>Information, Vol. 15, Pages 674: Benchmarking In-Sensor Machine Learning Computing: An Extension to the MLCommons-Tiny Suite</title> <link>https://www.mdpi.com/2078-2489/15/11/674</link> <description>This paper proposes a new benchmark specifically designed for in-sensor digital machine learning computing to meet an ultra-low embedded memory requirement. 
With the exponential growth of edge devices, efficient local processing is essential to mitigate economic costs, latency, and privacy concerns associated with the centralized cloud processing. Emerging intelligent sensors equipped with computing assets to run neural network inferences and embedded in the same package, which hosts the sensing elements, present new challenges due to their limited memory resources and computational skills. This benchmark evaluates models trained with Quantization Aware Training (QAT) and compares their performance with Post-Training Quantization (PTQ) across three use cases: Human Activity Recognition (HAR) by means of the SHL dataset, Physical Activity Monitoring (PAM) by means of the PAMAP2 dataset, and superficial electromyography (sEMG) regression with the NINAPRO DB8 dataset. The results demonstrate the effectiveness of QAT over PTQ in most scenarios, highlighting the potential for deploying advanced AI models on highly resource-constrained sensors. The INT8 versions of the models always outperformed their FP32, regarding memory and latency reductions, except for the activations for CNN. The CNN model exhibited reduced memory usage and latency with respect to its Dense counterpart, allowing it to meet the stringent 8KiB data RAM and 32 KiB program RAM limits of the ISPU. The TCN model proved to be too large to fit within the memory constraints of the ISPU, primarily due to its greater capacity in terms of number of parameters, designed for processing more complex signals like EMG. This benchmark aims to guide the development of efficient AI solutions for In-Sensor Machine Learning Computing, fostering innovation in the field of Edge AI benchmarking, such as the one conducted by the MLCommons-Tiny working group.</description> <pubDate>2024-10-28</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 674: Benchmarking In-Sensor Machine Learning Computing: An Extension to the MLCommons-Tiny Suite</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/674">doi: 10.3390/info15110674</a></p> <p>Authors: Fabrizio Maria Aymone Danilo Pietro Pau </p> <p>This paper proposes a new benchmark specifically designed for in-sensor digital machine learning computing to meet an ultra-low embedded memory requirement. With the exponential growth of edge devices, efficient local processing is essential to mitigate economic costs, latency, and privacy concerns associated with the centralized cloud processing. Emerging intelligent sensors equipped with computing assets to run neural network inferences and embedded in the same package, which hosts the sensing elements, present new challenges due to their limited memory resources and computational skills. This benchmark evaluates models trained with Quantization Aware Training (QAT) and compares their performance with Post-Training Quantization (PTQ) across three use cases: Human Activity Recognition (HAR) by means of the SHL dataset, Physical Activity Monitoring (PAM) by means of the PAMAP2 dataset, and superficial electromyography (sEMG) regression with the NINAPRO DB8 dataset. The results demonstrate the effectiveness of QAT over PTQ in most scenarios, highlighting the potential for deploying advanced AI models on highly resource-constrained sensors. The INT8 versions of the models always outperformed their FP32, regarding memory and latency reductions, except for the activations for CNN. The CNN model exhibited reduced memory usage and latency with respect to its Dense counterpart, allowing it to meet the stringent 8KiB data RAM and 32 KiB program RAM limits of the ISPU. The TCN model proved to be too large to fit within the memory constraints of the ISPU, primarily due to its greater capacity in terms of number of parameters, designed for processing more complex signals like EMG. 
This benchmark aims to guide the development of efficient AI solutions for In-Sensor Machine Learning Computing, fostering innovation in the field of Edge AI benchmarking, such as the one conducted by the MLCommons-Tiny working group.</p> ]]></content:encoded> <dc:title>Benchmarking In-Sensor Machine Learning Computing: An Extension to the MLCommons-Tiny Suite</dc:title> <dc:creator>Fabrizio Maria Aymone</dc:creator> <dc:creator>Danilo Pietro Pau</dc:creator> <dc:identifier>doi: 10.3390/info15110674</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-28</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-28</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>674</prism:startingPage> <prism:doi>10.3390/info15110674</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/674</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/673"> <title>Information, Vol. 15, Pages 673: On a Simplified Approach to Achieve Parallel Performance and Portability Across CPU and GPU Architectures</title> <link>https://www.mdpi.com/2078-2489/15/11/673</link> <description>This paper presents software advances to easily exploit computer architectures consisting of a multi-core CPU and CPU+GPU to accelerate diverse types of high-performance computing (HPC) applications using a single code implementation. The paper describes and demonstrates the performance of the open-source C++ matrix and array (MATAR) library that uniquely offers: (1) a straightforward syntax for programming productivity, (2) usable data structures for data-oriented programming (DOP) for performance, and (3) a simple interface to the open-source C++ Kokkos library for portability and memory management across CPUs and GPUs. 
The portability across architectures with a single code implementation is achieved by automatically switching between diverse fine-grained parallelism backends (e.g., CUDA, HIP, OpenMP, pthreads, etc.) at compile time. The MATAR library solves many longstanding challenges associated with easily writing software that can run in parallel on any computer architecture. This work benefits projects seeking to write new C++ codes while also addressing the challenges of quickly making existing Fortran codes performant and portable over modern computer architectures with minimal syntactical changes from Fortran to C++. We demonstrate the feasibility of readily writing new C++ codes and modernizing existing codes with MATAR to be performant, parallel, and portable across diverse computer architectures.</description> <pubDate>2024-10-28</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 673: On a Simplified Approach to Achieve Parallel Performance and Portability Across CPU and GPU Architectures</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/673">doi: 10.3390/info15110673</a></p> <p>Authors: Nathaniel Morgan Caleb Yenusah Adrian Diaz Daniel Dunning Jacob Moore Erin Heilman Calvin Roth Evan Lieberman Steven Walton Sarah Brown Daniel Holladay Marko Knezevic Gavin Whetstone Zachary Baker Robert Robey </p> <p>This paper presents software advances to easily exploit computer architectures consisting of a multi-core CPU and CPU+GPU to accelerate diverse types of high-performance computing (HPC) applications using a single code implementation. The paper describes and demonstrates the performance of the open-source C++ matrix and array (MATAR) library that uniquely offers: (1) a straightforward syntax for programming productivity, (2) usable data structures for data-oriented programming (DOP) for performance, and (3) a simple interface to the open-source C++ Kokkos library for portability and memory management across CPUs and GPUs. 
The portability across architectures with a single code implementation is achieved by automatically switching between diverse fine-grained parallelism backends (e.g., CUDA, HIP, OpenMP, pthreads, etc.) at compile time. The MATAR library solves many longstanding challenges associated with easily writing software that can run in parallel on any computer architecture. This work benefits projects seeking to write new C++ codes while also addressing the challenges of quickly making existing Fortran codes performant and portable over modern computer architectures with minimal syntactical changes from Fortran to C++. We demonstrate the feasibility of readily writing new C++ codes and modernizing existing codes with MATAR to be performant, parallel, and portable across diverse computer architectures.</p> ]]></content:encoded> <dc:title>On a Simplified Approach to Achieve Parallel Performance and Portability Across CPU and GPU Architectures</dc:title> <dc:creator>Nathaniel Morgan</dc:creator> <dc:creator>Caleb Yenusah</dc:creator> <dc:creator>Adrian Diaz</dc:creator> <dc:creator>Daniel Dunning</dc:creator> <dc:creator>Jacob Moore</dc:creator> <dc:creator>Erin Heilman</dc:creator> <dc:creator>Calvin Roth</dc:creator> <dc:creator>Evan Lieberman</dc:creator> <dc:creator>Steven Walton</dc:creator> <dc:creator>Sarah Brown</dc:creator> <dc:creator>Daniel Holladay</dc:creator> <dc:creator>Marko Knezevic</dc:creator> <dc:creator>Gavin Whetstone</dc:creator> <dc:creator>Zachary Baker</dc:creator> <dc:creator>Robert Robey</dc:creator> <dc:identifier>doi: 10.3390/info15110673</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-28</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-28</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>673</prism:startingPage> <prism:doi>10.3390/info15110673</prism:doi> 
<prism:url>https://www.mdpi.com/2078-2489/15/11/673</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/670"> <title>Information, Vol. 15, Pages 670: Efficient Schemes for Optimizing Load Balancing and Communication Cost in Edge Computing Networks</title> <link>https://www.mdpi.com/2078-2489/15/11/670</link> <description>Edge computing architectures promise increased quality of service with low communication delays by bringing cloud services closer to the end-users, at the distributed edge servers of the network edge. Hosting server capabilities at access nodes, thereby yielding edge service nodes, offers service proximity to users and provides QoS guarantees. However, the placement of edge servers should match the level of demand for computing resources and the location of user load. Thus, it is necessary to devise schemes that select the most appropriate access nodes to host computing services and associate every remaining access node with the most proper service node to ensure optimal service delivery. In this paper, we formulate this problem as an optimization problem with a bi-objective function that aims at both communication cost minimization and load balance optimization. We propose schemes that tackle this problem and compare their performance against previously proposed heuristics that have also been adapted to target both optimization goals. We study how these algorithms behave in lattice and random grid network topologies with uniform and non-uniform workloads. The results validate the efficiency of our proposed schemes in addition to the significantly lower execution times compared to the other heuristics.</description> <pubDate>2024-10-25</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol.
15, Pages 670: Efficient Schemes for Optimizing Load Balancing and Communication Cost in Edge Computing Networks</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/670">doi: 10.3390/info15110670</a></p> <p>Authors: Efthymios Oikonomou Angelos Rouskas </p> <p>Edge computing architectures promise increased quality of service with low communication delays by bringing cloud services closer to the end-users, at the distributed edge servers of the network edge. Hosting server capabilities at access nodes, thereby yielding edge service nodes, offers service proximity to users and provides QoS guarantees. However, the placement of edge servers should match the level of demand for computing resources and the location of user load. Thus, it is necessary to devise schemes that select the most appropriate access nodes to host computing services and associate every remaining access node with the most proper service node to ensure optimal service delivery. In this paper, we formulate this problem as an optimization problem with a bi-objective function that aims at both communication cost minimization and load balance optimization. We propose schemes that tackle this problem and compare their performance against previously proposed heuristics that have also been adapted to target both optimization goals. We study how these algorithms behave in lattice and random grid network topologies with uniform and non-uniform workloads.
The results validate the efficiency of our proposed schemes in addition to the significantly lower execution times compared to the other heuristics.</p> ]]></content:encoded> <dc:title>Efficient Schemes for Optimizing Load Balancing and Communication Cost in Edge Computing Networks</dc:title> <dc:creator>Efthymios Oikonomou</dc:creator> <dc:creator>Angelos Rouskas</dc:creator> <dc:identifier>doi: 10.3390/info15110670</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-25</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-25</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>670</prism:startingPage> <prism:doi>10.3390/info15110670</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/670</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/671"> <title>Information, Vol. 15, Pages 671: Effects of Generative AI in Tourism Industry</title> <link>https://www.mdpi.com/2078-2489/15/11/671</link> <description>In the dynamic and evolving tourism industry, engaging with stakeholders is essential for fostering innovation and improving service quality. However, tourism companies often struggle to meet expectations for customer satisfaction through interactivity and real-time feedback. While new digital technologies can address the challenge of providing personalized travel experiences, they can also increase the workload for travel agencies due to the maintenance and updates required to keep travel details current. Intelligent chatbots and other generative artificial intelligence (GAI) tools can help mitigate these obstacles by transforming tourism and travel-related services, offering interactive guidance for both tourism companies and travelers. 
In this study, we explore and compare the main characteristics of existing responsive AI instruments applicable in tourism and hospitality scenarios. Then, we propose a new theoretical framework for decision making in the tourism industry, integrating GAI technologies to enable agencies to create and manage itineraries, and tourists to interact online with these innovative instruments. The advantages of the proposed framework are as follows: (1) providing a comprehensive understanding of the transformative potential of new generation AI tools in tourism and facilitating their effective implementation; (2) offering a holistic methodology to enhance the tourist experience; (3) unifying the applications of contemporary AI instruments in tourism activities and paving the way for their further development. The study contributes to the expanding literature on tourism modernization and offers recommendations for industry practitioners, consumers, and local, regional, and national tourism bodies to adopt a more user-centric approach to enhancing travel services.</description> <pubDate>2024-10-25</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 671: Effects of Generative AI in Tourism Industry</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/671">doi: 10.3390/info15110671</a></p> <p>Authors: Galina Ilieva Tania Yankova Stanislava Klisarova-Belcheva </p> <p>In the dynamic and evolving tourism industry, engaging with stakeholders is essential for fostering innovation and improving service quality. However, tourism companies often struggle to meet expectations for customer satisfaction through interactivity and real-time feedback. While new digital technologies can address the challenge of providing personalized travel experiences, they can also increase the workload for travel agencies due to the maintenance and updates required to keep travel details current. 
Intelligent chatbots and other generative artificial intelligence (GAI) tools can help mitigate these obstacles by transforming tourism and travel-related services, offering interactive guidance for both tourism companies and travelers. In this study, we explore and compare the main characteristics of existing responsive AI instruments applicable in tourism and hospitality scenarios. Then, we propose a new theoretical framework for decision making in the tourism industry, integrating GAI technologies to enable agencies to create and manage itineraries, and tourists to interact online with these innovative instruments. The advantages of the proposed framework are as follows: (1) providing a comprehensive understanding of the transformative potential of new generation AI tools in tourism and facilitating their effective implementation; (2) offering a holistic methodology to enhance the tourist experience; (3) unifying the applications of contemporary AI instruments in tourism activities and paving the way for their further development. 
The study contributes to the expanding literature on tourism modernization and offers recommendations for industry practitioners, consumers, and local, regional, and national tourism bodies to adopt a more user-centric approach to enhancing travel services.</p> ]]></content:encoded> <dc:title>Effects of Generative AI in Tourism Industry</dc:title> <dc:creator>Galina Ilieva</dc:creator> <dc:creator>Tania Yankova</dc:creator> <dc:creator>Stanislava Klisarova-Belcheva</dc:creator> <dc:identifier>doi: 10.3390/info15110671</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-25</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-25</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>671</prism:startingPage> <prism:doi>10.3390/info15110671</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/671</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/672"> <title>Information, Vol. 15, Pages 672: Hands-On and Virtual Laboratories in Electronic Circuits Learning&amp;mdash;Knowledge and Skills Acquisition</title> <link>https://www.mdpi.com/2078-2489/15/11/672</link> <description>Hands-on and virtual laboratory-based learning has been integrated into science education due to its potential positive impact on students&amp;rsquo; knowledge and skills development. In this study, we explore the effect of the hands-on and virtual laboratories on 152 undergraduate students&amp;rsquo; conceptual knowledge, inquiry, and measurement skills acquisition in the domain of operational amplifiers (op-amps) circuit learning.
Students were divided into two groups and performed individually three experimental exercises involving basic op-amps electronic circuits: students in the Hands-On group performed the exercises in a physical laboratory environment, while students in the Virtual group performed the exercises in a virtual environment with TINA-TI (v9) software. Pre-post tests were used to quantify student performance progress stemming from their laboratory-type activities. Based on our findings, knowledge was developed the most, followed by inquiry skills, and finally, skills related to measuring electronic current quantities in a circuit, F(2,456) = 44.183, p = 0.000. Additionally, an ANCOVA analysis comparing the means of the three exercises revealed that the group participating in hands-on activities outperformed the group engaged in virtual activities, F(1,152) = 9.039, p = 0.003. Finally, we recommend designing a curriculum that focuses on both cognitive growth and skills development in the domain of op-amps.</description> <pubDate>2024-10-25</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 672: Hands-On and Virtual Laboratories in Electronic Circuits Learning&mdash;Knowledge and Skills Acquisition</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/672">doi: 10.3390/info15110672</a></p> <p>Authors: Christos Tokatlidis Sokratis Tselegkaridis Sophia Rapti Theodosios Sapounidis Dimitrios Papakostas </p> <p>Hands-on and virtual laboratory-based learning has been integrated into science education due to its potential positive impact on students&amp;rsquo; knowledge and skills development. In this study, we explore the effect of the hands-on and virtual laboratories on 152 undergraduate students&amp;rsquo; conceptual knowledge, inquiry, and measurement skills acquisition in the domain of operational amplifiers (op-amps) circuit learning. 
Students were divided into two groups and performed individually three experimental exercises involving basic op-amps electronic circuits: students in the Hands-On group performed the exercises in a physical laboratory environment, while students in the Virtual group performed the exercises in a virtual environment with TINA-TI (v9) software. Pre-post tests were used to quantify student performance progress stemming from their laboratory-type activities. Based on our findings, knowledge was developed the most, followed by inquiry skills, and finally, skills related to measuring electronic current quantities in a circuit, F(2,456) = 44.183, p = 0.000. Additionally, an ANCOVA analysis comparing the means of the three exercises revealed that the group participating in hands-on activities outperformed the group engaged in virtual activities, F(1,152) = 9.039, p = 0.003. Finally, we recommend designing a curriculum that focuses on both cognitive growth and skills development in the domain of op-amps.</p> ]]></content:encoded> <dc:title>Hands-On and Virtual Laboratories in Electronic Circuits Learning&amp;mdash;Knowledge and Skills Acquisition</dc:title> <dc:creator>Christos Tokatlidis</dc:creator> <dc:creator>Sokratis Tselegkaridis</dc:creator> <dc:creator>Sophia Rapti</dc:creator> <dc:creator>Theodosios Sapounidis</dc:creator> <dc:creator>Dimitrios Papakostas</dc:creator> <dc:identifier>doi: 10.3390/info15110672</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-25</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-25</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>672</prism:startingPage> <prism:doi>10.3390/info15110672</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/672</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/669"> 
<title>Information, Vol. 15, Pages 669: Building Bio-Ontology Graphs from Data Using Logic and NLP</title> <link>https://www.mdpi.com/2078-2489/15/11/669</link> <description>In this age of big data and natural language processing, to what extent can we leverage new technologies and new tools to make progress in organizing disparate biomedical data sources? Imagine a system in which one could bring together sequencing data with phenotypes, gene expression data, and clinical information all under the same conceptual heading where applicable. Bio-ontologies seek to carry this out by organizing the relations between concepts and attaching the data to their corresponding concept. However, to accomplish this, we need considerable time and human input. Instead of resorting to human input alone, we describe a novel approach to obtaining the foundation for bio-ontologies: obtaining propositions (links between concepts) from biomedical text so as to fill the ontology. The heart of our approach is applying logic rules from Aristotelian logic and natural logic to biomedical information to derive propositions so that we can have material to organize knowledge bases (ontologies) for biomedical research. We demonstrate this approach by constructing a proof-of-principle bio-ontology for COVID-19 and related diseases.</description> <pubDate>2024-10-25</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 669: Building Bio-Ontology Graphs from Data Using Logic and NLP</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/669">doi: 10.3390/info15110669</a></p> <p>Authors: Theresa Gasser Erick Chastain </p> <p>In this age of big data and natural language processing, to what extent can we leverage new technologies and new tools to make progress in organizing disparate biomedical data sources? 
Imagine a system in which one could bring together sequencing data with phenotypes, gene expression data, and clinical information all under the same conceptual heading where applicable. Bio-ontologies seek to carry this out by organizing the relations between concepts and attaching the data to their corresponding concept. However, to accomplish this, we need considerable time and human input. Instead of resorting to human input alone, we describe a novel approach to obtaining the foundation for bio-ontologies: obtaining propositions (links between concepts) from biomedical text so as to fill the ontology. The heart of our approach is applying logic rules from Aristotelian logic and natural logic to biomedical information to derive propositions so that we can have material to organize knowledge bases (ontologies) for biomedical research. We demonstrate this approach by constructing a proof-of-principle bio-ontology for COVID-19 and related diseases.</p> ]]></content:encoded> <dc:title>Building Bio-Ontology Graphs from Data Using Logic and NLP</dc:title> <dc:creator>Theresa Gasser</dc:creator> <dc:creator>Erick Chastain</dc:creator> <dc:identifier>doi: 10.3390/info15110669</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-25</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-25</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>669</prism:startingPage> <prism:doi>10.3390/info15110669</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/669</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/668"> <title>Information, Vol. 
15, Pages 668: A Note on Equivalent and Nonequivalent Parametrizations of the Two-Parameter Logistic Item Response Model</title> <link>https://www.mdpi.com/2078-2489/15/11/668</link> <description>The two-parameter logistic (2PL) item response model is typically estimated using an unbounded distribution for the trait &amp;theta;. In this article, alternative specifications of the 2PL models are investigated that consider a bounded or a positively valued &amp;theta; distribution. It is highlighted that these 2PL specifications correspond to the partial membership mastery model and the Ramsay quotient model, respectively. A simulation study revealed that model selection regarding alternative ranges of the &amp;theta; distribution can be successfully applied. Different 2PL specifications were additionally compared for six publicly available datasets.</description> <pubDate>2024-10-23</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 668: A Note on Equivalent and Nonequivalent Parametrizations of the Two-Parameter Logistic Item Response Model</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/668">doi: 10.3390/info15110668</a></p> <p>Authors: Alexander Robitzsch </p> <p>The two-parameter logistic (2PL) item response model is typically estimated using an unbounded distribution for the trait &amp;theta;. In this article, alternative specifications of the 2PL models are investigated that consider a bounded or a positively valued &amp;theta; distribution. It is highlighted that these 2PL specifications correspond to the partial membership mastery model and the Ramsay quotient model, respectively. A simulation study revealed that model selection regarding alternative ranges of the &amp;theta; distribution can be successfully applied. 
Different 2PL specifications were additionally compared for six publicly available datasets.</p> ]]></content:encoded> <dc:title>A Note on Equivalent and Nonequivalent Parametrizations of the Two-Parameter Logistic Item Response Model</dc:title> <dc:creator>Alexander Robitzsch</dc:creator> <dc:identifier>doi: 10.3390/info15110668</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-23</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-23</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>668</prism:startingPage> <prism:doi>10.3390/info15110668</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/668</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/667"> <title>Information, Vol. 15, Pages 667: Enhanced Chaotic Pseudorandom Number Generation Using Multiple Bernoulli Maps with Field Programmable Gate Array Optimizations</title> <link>https://www.mdpi.com/2078-2489/15/11/667</link> <description>Certain methods for implementing chaotic maps can lead to dynamic degradation of the generated number sequences. To solve such a problem, we develop a method for generating pseudorandom number sequences based on multiple one-dimensional chaotic maps. In particular, we introduce a Bernoulli chaotic map that utilizes function transformations and constraints on its control parameter, covering complementary regions of the phase space. This approach allows the generation of chaotic number sequences with a wide coverage of phase space, thereby increasing the uncertainty in the number sequence generation process. 
Moreover, by incorporating a scaling factor and a sine function, we develop a robust chaotic map, called the Sine-Multiple Modified Bernoulli Chaotic Map (SM-MBCM), which ensures a high degree of randomness, validated through statistical mechanics analysis tools. Using the SM-MBCM, we propose a chaotic PRNG (CPRNG) and evaluate its quality through correlation coefficient analysis, key sensitivity tests, statistical and entropy analysis, key space evaluation, linear complexity analysis, and performance tests. Furthermore, we present an FPGA-based implementation scheme that leverages equivalent MBCM variants to optimize the electronic implementation process. Finally, we compare the proposed system with existing designs in terms of throughput and key space.</description> <pubDate>2024-10-23</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 667: Enhanced Chaotic Pseudorandom Number Generation Using Multiple Bernoulli Maps with Field Programmable Gate Array Optimizations</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/667">doi: 10.3390/info15110667</a></p> <p>Authors: Leonardo Palacios-Luengas Reyna Carolina Medina-Ramírez Ricardo Marcelín-Jiménez Enrique Rodriguez-Colina Francisco R. Castillo-Soria Rubén Vázquez-Medina </p> <p>Certain methods for implementing chaotic maps can lead to dynamic degradation of the generated number sequences. To solve such a problem, we develop a method for generating pseudorandom number sequences based on multiple one-dimensional chaotic maps. In particular, we introduce a Bernoulli chaotic map that utilizes function transformations and constraints on its control parameter, covering complementary regions of the phase space. This approach allows the generation of chaotic number sequences with a wide coverage of phase space, thereby increasing the uncertainty in the number sequence generation process.
Moreover, by incorporating a scaling factor and a sine function, we develop a robust chaotic map, called the Sine-Multiple Modified Bernoulli Chaotic Map (SM-MBCM), which ensures a high degree of randomness, validated through statistical mechanics analysis tools. Using the SM-MBCM, we propose a chaotic PRNG (CPRNG) and evaluate its quality through correlation coefficient analysis, key sensitivity tests, statistical and entropy analysis, key space evaluation, linear complexity analysis, and performance tests. Furthermore, we present an FPGA-based implementation scheme that leverages equivalent MBCM variants to optimize the electronic implementation process. Finally, we compare the proposed system with existing designs in terms of throughput and key space.</p> ]]></content:encoded> <dc:title>Enhanced Chaotic Pseudorandom Number Generation Using Multiple Bernoulli Maps with Field Programmable Gate Array Optimizations</dc:title> <dc:creator>Leonardo Palacios-Luengas</dc:creator> <dc:creator>Reyna Carolina Medina-Ramírez</dc:creator> <dc:creator>Ricardo Marcelín-Jiménez</dc:creator> <dc:creator>Enrique Rodriguez-Colina</dc:creator> <dc:creator>Francisco R. Castillo-Soria</dc:creator> <dc:creator>Rubén Vázquez-Medina</dc:creator> <dc:identifier>doi: 10.3390/info15110667</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-23</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-23</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>667</prism:startingPage> <prism:doi>10.3390/info15110667</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/667</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/666"> <title>Information, Vol.
15, Pages 666: Construction of Legal Knowledge Graph Based on Knowledge-Enhanced Large Language Models</title> <link>https://www.mdpi.com/2078-2489/15/11/666</link> <description>Legal knowledge involves multidimensional heterogeneous knowledge such as legal provisions, judicial interpretations, judicial cases, and defenses, which requires extremely high relevance and accuracy of knowledge. Meanwhile, the construction of a legal knowledge reasoning system also faces challenges in obtaining, processing, and sharing multisource heterogeneous knowledge. The knowledge graph technology, which is a knowledge organization form with triples as the basic unit, is able to efficiently transform multisource heterogeneous information into a knowledge representation form close to human cognition. Taking the automated construction of the Chinese legal knowledge graph (CLKG) as a case scenario, this paper presents a joint knowledge enhancement model (JKEM), where prior knowledge is embedded into a large language model (LLM), and the LLM is fine-tuned through the prefix of the prior knowledge data. Under the condition of freezing most parameters of the LLM, this fine-tuning scheme adds continuous deep prompts as prefix tokens to the input sequences of different layers, which can significantly improve the accuracy of knowledge extraction. The results show that the knowledge extraction accuracy of the JKEM in this paper reaches 90.92%. Based on the superior performance of this model, the CLKG is further constructed, which contains 3480 knowledge triples composed of 9 entities and 2 relationships, providing strong support for an in-depth understanding of the complex relationships in the legal field.</description> <pubDate>2024-10-23</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 666: Construction of Legal Knowledge Graph Based on Knowledge-Enhanced Large Language Models</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/666">doi: 10.3390/info15110666</a></p> <p>Authors: Jun Li Lu Qian Peifeng Liu Taoxiong Liu </p> <p>Legal knowledge involves multidimensional heterogeneous knowledge such as legal provisions, judicial interpretations, judicial cases, and defenses, which requires extremely high relevance and accuracy of knowledge. Meanwhile, the construction of a legal knowledge reasoning system also faces challenges in obtaining, processing, and sharing multisource heterogeneous knowledge. The knowledge graph technology, which is a knowledge organization form with triples as the basic unit, is able to efficiently transform multisource heterogeneous information into a knowledge representation form close to human cognition. Taking the automated construction of the Chinese legal knowledge graph (CLKG) as a case scenario, this paper presents a joint knowledge enhancement model (JKEM), where prior knowledge is embedded into a large language model (LLM), and the LLM is fine-tuned through the prefix of the prior knowledge data. Under the condition of freezing most parameters of the LLM, this fine-tuning scheme adds continuous deep prompts as prefix tokens to the input sequences of different layers, which can significantly improve the accuracy of knowledge extraction. The results show that the knowledge extraction accuracy of the JKEM in this paper reaches 90.92%. 
Based on the superior performance of this model, the CLKG is further constructed, which contains 3480 knowledge triples composed of 9 entities and 2 relationships, providing strong support for an in-depth understanding of the complex relationships in the legal field.</p> ]]></content:encoded> <dc:title>Construction of Legal Knowledge Graph Based on Knowledge-Enhanced Large Language Models</dc:title> <dc:creator>Jun Li</dc:creator> <dc:creator>Lu Qian</dc:creator> <dc:creator>Peifeng Liu</dc:creator> <dc:creator>Taoxiong Liu</dc:creator> <dc:identifier>doi: 10.3390/info15110666</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-23</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-23</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>666</prism:startingPage> <prism:doi>10.3390/info15110666</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/666</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/665"> <title>Information, Vol. 15, Pages 665: Online Learning from the Learning Cycle Perspective: Discovering Patterns in Recent Research</title> <link>https://www.mdpi.com/2078-2489/15/11/665</link> <description>We propose a method for automatically extracting new trends and best practices from the recent literature on online learning, aligned with the learning cycle perspective. Using titles and abstracts of research articles published in high ranked educational journals, we assign topic proportions to the articles, where the topics are aligned with the components of the learning cycle: engagement, exploration, explanation, elaboration, evaluation, and evolution. The topic analysis is conducted using keyword-based Latent Dirichlet allocation, and the topic keywords are chosen to reflect the nature of the learning cycle components. 
Our analysis reveals the time dynamics of research topics aligned on learning cycle components, component weights, and interconnections between them in the current research focus. Connections between the topics and user-defined learning elements are discovered. Concretely, we examine how effective learning elements such as virtual reality, multimedia, gamification, and problem-based learning are related to the learning cycle components in the literature. In this way, any innovative learning strategy or learning element can be placed in the landscape of the learning cycle topics. The analysis can be helpful to other researchers when designing effective learning activities that address particular components of the learning cycle.</description> <pubDate>2024-10-22</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 665: Online Learning from the Learning Cycle Perspective: Discovering Patterns in Recent Research</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/665">doi: 10.3390/info15110665</a></p> <p>Authors: Maria Osipenko </p> <p>We propose a method for automatically extracting new trends and best practices from the recent literature on online learning, aligned with the learning cycle perspective. Using titles and abstracts of research articles published in high ranked educational journals, we assign topic proportions to the articles, where the topics are aligned with the components of the learning cycle: engagement, exploration, explanation, elaboration, evaluation, and evolution. The topic analysis is conducted using keyword-based Latent Dirichlet allocation, and the topic keywords are chosen to reflect the nature of the learning cycle components. Our analysis reveals the time dynamics of research topics aligned on learning cycle components, component weights, and interconnections between them in the current research focus. Connections between the topics and user-defined learning elements are discovered. 
Concretely, we examine how effective learning elements such as virtual reality, multimedia, gamification, and problem-based learning are related to the learning cycle components in the literature. In this way, any innovative learning strategy or learning element can be placed in the landscape of the learning cycle topics. The analysis can be helpful to other researchers when designing effective learning activities that address particular components of the learning cycle.</p> ]]></content:encoded> <dc:title>Online Learning from the Learning Cycle Perspective: Discovering Patterns in Recent Research</dc:title> <dc:creator>Maria Osipenko</dc:creator> <dc:identifier>doi: 10.3390/info15110665</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-22</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-22</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>665</prism:startingPage> <prism:doi>10.3390/info15110665</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/665</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/664"> <title>Information, Vol. 15, Pages 664: Few-Shot Methods for Aspect-Level Sentiment Analysis</title> <link>https://www.mdpi.com/2078-2489/15/11/664</link> <description>In this paper, we explore the approaches to the problem of cross-domain few-shot classification of sentiment aspects. By cross-domain few-shot, we mean a setting where the model is trained on large data in one domain (for example, hotel reviews) and is intended to perform on another (for example, restaurant reviews) with only a few labelled examples in the target domain. We start with pre-trained monolingual language models. 
Using the Polish language dataset AspectEmo, we compare model training using standard gradient-based learning to a zero-shot approach and two dedicated few-shot methods: ProtoNet and NNShot. We find both dedicated methods much superior to both gradient learning and zero-shot setup, with a small advantage held by NNShot. Overall, we find few-shot to be a compelling alternative, achieving a surprising amount of performance compared to gradient training on full-size data.</description> <pubDate>2024-10-22</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 664: Few-Shot Methods for Aspect-Level Sentiment Analysis</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/664">doi: 10.3390/info15110664</a></p> <p>Authors: Aleksander Wawer </p> <p>In this paper, we explore the approaches to the problem of cross-domain few-shot classification of sentiment aspects. By cross-domain few-shot, we mean a setting where the model is trained on large data in one domain (for example, hotel reviews) and is intended to perform on another (for example, restaurant reviews) with only a few labelled examples in the target domain. We start with pre-trained monolingual language models. Using the Polish language dataset AspectEmo, we compare model training using standard gradient-based learning to a zero-shot approach and two dedicated few-shot methods: ProtoNet and NNShot. We find both dedicated methods much superior to both gradient learning and zero-shot setup, with a small advantage held by NNShot. 
Overall, we find few-shot to be a compelling alternative, achieving a surprising amount of performance compared to gradient training on full-size data.</p> ]]></content:encoded> <dc:title>Few-Shot Methods for Aspect-Level Sentiment Analysis</dc:title> <dc:creator>Aleksander Wawer</dc:creator> <dc:identifier>doi: 10.3390/info15110664</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-22</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-22</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>664</prism:startingPage> <prism:doi>10.3390/info15110664</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/664</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/663"> <title>Information, Vol. 15, Pages 663: Impact of Digital Innovations on Health Literacy Applied to Patients with Special Needs: A Systematic Review</title> <link>https://www.mdpi.com/2078-2489/15/11/663</link> <description>MHealth strategies have been used in various health areas, and mobile apps have been used in the context of health self-management. They can be considered an adjuvant intervention in oral health literacy, mainly for people with special health needs. Thus, the aim of this study was to identify the improvement of oral health literacy in patients with special needs when using digital platforms. A systematic literature review, based on the Joanna Briggs Institute (JBI) guidelines, was the main research method employed in this study. A search was undertaken in PubMed/MEDLINE and Cochrane Central Register of Controlled Trials (CENTRAL) databases, according to the relevant MeSH descriptors, their synonyms, and free terms (Entry Terms). Studies published between the years 2012 and 2023 were included. 
Two researchers independently assessed the quality of the included studies by completing the Newcastle&amp;ndash;Ottawa Quality Assessment Scale questionnaire. The analysis corpus comprised 5 articles among the 402 articles selected after applying the inclusion/exclusion criteria (k = 0.97). The evidence from the considered articles is consensual regarding the effectiveness of using new technologies and innovations in promoting oral health literacy in patients with special health needs. The interventions were based on using the Illustration Reinforcement Communication System, inspired by the Picture Exchange Communication System, Nintendo&amp;reg; Wii&amp;trade; TV, virtual reality, smartphones, with software applications to read messages sent, Audio Tactile Performance technique, and Art package. One study had a low-quality assessment, and four had a high quality. The evidence from the articles included in this systematic review is consistent regarding the effectiveness of using new technologies and innovations in promoting oral health literacy in patients with special health needs.</description> <pubDate>2024-10-22</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 663: Impact of Digital Innovations on Health Literacy Applied to Patients with Special Needs: A Systematic Review</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/663">doi: 10.3390/info15110663</a></p> <p>Authors: Lucilene Bustilho Cardoso Patrícia Couto Patrícia Correia Pedro C. Lopes Juliana Campos Hasse Fernandes Gustavo Vicentis Oliveira Fernandes Nélio Jorge Veiga </p> <p>MHealth strategies have been used in various health areas, and mobile apps have been used in the context of health self-management. They can be considered an adjuvant intervention in oral health literacy, mainly for people with special health needs. 
Thus, the aim of this study was to identify the improvement of oral health literacy in patients with special needs when using digital platforms. A systematic literature review, based on the Joanna Briggs Institute (JBI) guidelines, was the main research method employed in this study. A search was undertaken in PubMed/MEDLINE and Cochrane Central Register of Controlled Trials (CENTRAL) databases, according to the relevant MeSH descriptors, their synonyms, and free terms (Entry Terms). Studies published between the years 2012 and 2023 were included. Two researchers independently assessed the quality of the included studies by completing the Newcastle&amp;ndash;Ottawa Quality Assessment Scale questionnaire. The analysis corpus comprised 5 articles among the 402 articles selected after applying the inclusion/exclusion criteria (k = 0.97). The evidence from the considered articles is consensual regarding the effectiveness of using new technologies and innovations in promoting oral health literacy in patients with special health needs. The interventions were based on using the Illustration Reinforcement Communication System, inspired by the Picture Exchange Communication System, Nintendo&amp;reg; Wii&amp;trade; TV, virtual reality, smartphones, with software applications to read messages sent, Audio Tactile Performance technique, and Art package. One study had a low-quality assessment, and four had a high quality. The evidence from the articles included in this systematic review is consistent regarding the effectiveness of using new technologies and innovations in promoting oral health literacy in patients with special health needs.</p> ]]></content:encoded> <dc:title>Impact of Digital Innovations on Health Literacy Applied to Patients with Special Needs: A Systematic Review</dc:title> <dc:creator>Lucilene Bustilho Cardoso</dc:creator> <dc:creator>Patrícia Couto</dc:creator> <dc:creator>Patrícia Correia</dc:creator> <dc:creator>Pedro C. 
Lopes</dc:creator> <dc:creator>Juliana Campos Hasse Fernandes</dc:creator> <dc:creator>Gustavo Vicentis Oliveira Fernandes</dc:creator> <dc:creator>Nélio Jorge Veiga</dc:creator> <dc:identifier>doi: 10.3390/info15110663</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-22</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-22</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Review</prism:section> <prism:startingPage>663</prism:startingPage> <prism:doi>10.3390/info15110663</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/663</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/662"> <title>Information, Vol. 15, Pages 662: To What Extent Have LLMs Reshaped the Legal Domain So Far? A Scoping Literature Review</title> <link>https://www.mdpi.com/2078-2489/15/11/662</link> <description>Understanding and explaining legal systems is very challenging due to their complex structure, specialized terminology, and multiple interpretations. Legal AI models are currently undergoing drastic advancements due to the development of Large Language Models (LLMs) that have achieved state-of-the-art performance on a wide range of tasks and are currently undergoing very rapid iterations. As an emerging field, the application of LLMs in the legal field is still in its early stages, with multiple challenges that need to be addressed. Our objective is to provide a comprehensive survey of legal LLMs, not only reviewing the models themselves but also analyzing their applications within the legal systems in different geographies. The paper begins by providing a high-level overview of AI technologies in the legal field and showcasing recent research advancements in LLMs, followed by practical implementations of legal LLMs. 
Two databases (i.e., SCOPUS and Web of Science) were considered alongside additional related studies that met our selection criteria. We used the PRISMA for Scoping Reviews (PRISMA-ScR) guidelines as the methodology to extract relevant studies and report our findings. The paper discusses and analyses the limitations and challenges faced by legal LLMs, including issues related to data, algorithms, and judicial practices. Moreover, we examine the extent to which such systems can be effectively deployed. The paper summarizes recommendations and future directions to address challenges, aiming to help stakeholders overcome limitations and integrate legal LLMs into the judicial system.</description> <pubDate>2024-10-22</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 662: To What Extent Have LLMs Reshaped the Legal Domain So Far? A Scoping Literature Review</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/662">doi: 10.3390/info15110662</a></p> <p>Authors: Bogdan Padiu Radu Iacob Traian Rebedea Mihai Dascalu </p> <p>Understanding and explaining legal systems is very challenging due to their complex structure, specialized terminology, and multiple interpretations. Legal AI models are currently undergoing drastic advancements due to the development of Large Language Models (LLMs) that have achieved state-of-the-art performance on a wide range of tasks and are currently undergoing very rapid iterations. As an emerging field, the application of LLMs in the legal field is still in its early stages, with multiple challenges that need to be addressed. Our objective is to provide a comprehensive survey of legal LLMs, not only reviewing the models themselves but also analyzing their applications within the legal systems in different geographies. The paper begins by providing a high-level overview of AI technologies in the legal field and showcasing recent research advancements in LLMs, followed by practical implementations of legal LLMs. 
Two databases (i.e., SCOPUS and Web of Science) were considered alongside additional related studies that met our selection criteria. We used the PRISMA for Scoping Reviews (PRISMA-ScR) guidelines as the methodology to extract relevant studies and report our findings. The paper discusses and analyses the limitations and challenges faced by legal LLMs, including issues related to data, algorithms, and judicial practices. Moreover, we examine the extent to which such systems can be effectively deployed. The paper summarizes recommendations and future directions to address challenges, aiming to help stakeholders overcome limitations and integrate legal LLMs into the judicial system.</p> ]]></content:encoded> <dc:title>To What Extent Have LLMs Reshaped the Legal Domain So Far? A Scoping Literature Review</dc:title> <dc:creator>Bogdan Padiu</dc:creator> <dc:creator>Radu Iacob</dc:creator> <dc:creator>Traian Rebedea</dc:creator> <dc:creator>Mihai Dascalu</dc:creator> <dc:identifier>doi: 10.3390/info15110662</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-22</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-22</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Systematic Review</prism:section> <prism:startingPage>662</prism:startingPage> <prism:doi>10.3390/info15110662</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/662</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/11/661"> <title>Information, Vol. 
15, Pages 661: A Context-Based Perspective on Frost Analysis in Reuse-Oriented Big Data-System Developments</title> <link>https://www.mdpi.com/2078-2489/15/11/661</link> <description>The large amount of available data, generated every second via sensors, social networks, organizations, and so on, has generated new lines of research that involve novel methods, techniques, resources, and/or technologies. The development of big data systems (BDSs) can be approached from different perspectives, all of them useful, depending on the objectives pursued. In particular, in this work, we address BDSs in the area of software engineering, contributing to the generation of novel methodologies and techniques for software reuse. In this article, we propose a methodology to develop reusable BDSs by mirroring activities from software product line engineering. This means that the process of building BDSs is approached by analyzing the variety of domain features and modeling them as a family of related assets. The contextual perspective of the proposal, along with its supporting tool, is introduced through a case study in the agrometeorology domain. The characterization of variables for frost analysis exemplifies the importance of identifying variety, as well as the possibility of reusing previous analyses adjusted to the profile of each case. In addition to showing interesting findings from the case, we also exemplify our concept of context variety, which is a core element in modeling reusable BDSs.</description> <pubDate>2024-10-22</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 661: A Context-Based Perspective on Frost Analysis in Reuse-Oriented Big Data-System Developments</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/11/661">doi: 10.3390/info15110661</a></p> <p>Authors: Agustina Buccella Alejandra Cechich Federico Saurin Ayelén Montenegro Andrea Rodríguez Angel Muñoz </p> <p>The large amount of available data, generated every second via sensors, social networks, organizations, and so on, has generated new lines of research that involve novel methods, techniques, resources, and/or technologies. The development of big data systems (BDSs) can be approached from different perspectives, all of them useful, depending on the objectives pursued. In particular, in this work, we address BDSs in the area of software engineering, contributing to the generation of novel methodologies and techniques for software reuse. In this article, we propose a methodology to develop reusable BDSs by mirroring activities from software product line engineering. This means that the process of building BDSs is approached by analyzing the variety of domain features and modeling them as a family of related assets. The contextual perspective of the proposal, along with its supporting tool, is introduced through a case study in the agrometeorology domain. The characterization of variables for frost analysis exemplifies the importance of identifying variety, as well as the possibility of reusing previous analyses adjusted to the profile of each case. 
In addition to showing interesting findings from the case, we also exemplify our concept of context variety, which is a core element in modeling reusable BDSs.</p> ]]></content:encoded> <dc:title>A Context-Based Perspective on Frost Analysis in Reuse-Oriented Big Data-System Developments</dc:title> <dc:creator>Agustina Buccella</dc:creator> <dc:creator>Alejandra Cechich</dc:creator> <dc:creator>Federico Saurin</dc:creator> <dc:creator>Ayelén Montenegro</dc:creator> <dc:creator>Andrea Rodríguez</dc:creator> <dc:creator>Angel Muñoz</dc:creator> <dc:identifier>doi: 10.3390/info15110661</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-22</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-22</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>11</prism:number> <prism:section>Article</prism:section> <prism:startingPage>661</prism:startingPage> <prism:doi>10.3390/info15110661</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/11/661</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/10/660"> <title>Information, Vol. 15, Pages 660: Recommender Systems Applications: Data Sources, Features, and Challenges</title> <link>https://www.mdpi.com/2078-2489/15/10/660</link> <description>In recent years, there has been growing interest in recommendation systems, which is matched by their widespread adoption across various sectors. This can be attributed to their effectiveness in reducing an avalanche of data into individualized information that is meaningful, relevant, and can easily be absorbed by a single person. Several studies have recently navigated the landscape of recommendation systems, attending to their approaches, challenges, and applications, as well as the evaluation metrics necessary for effective implementation. 
This systematic review investigates the understudied aspects of recommendation systems, including the data input into the systems and their features or outputs. The data in (input) and data out (features) are both diverse and vary significantly from not just one application domain to another, but also from one application use case to another, which is a distinction that has not been thoroughly addressed in the past. In addition, this study explores several application domains, providing a comprehensive breakdown of the categorical data consumed by these systems and the features, or outputs, of these systems. Without focusing on any particular journals or their rankings, this study collects and reviews articles on recommendation systems published from 2018 to April 2024, in four top-tier research repositories, including IEEE Xplore Digital Library, Springer Link, ACM Digital Library, and Google Scholar.</description> <pubDate>2024-10-21</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 660: Recommender Systems Applications: Data Sources, Features, and Challenges</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/10/660">doi: 10.3390/info15100660</a></p> <p>Authors: Yousef H. Alfaifi </p> <p>In recent years, there has been growing interest in recommendation systems, which is matched by their widespread adoption across various sectors. This can be attributed to their effectiveness in reducing an avalanche of data into individualized information that is meaningful, relevant, and can easily be absorbed by a single person. Several studies have recently navigated the landscape of recommendation systems, attending to their approaches, challenges, and applications, as well as the evaluation metrics necessary for effective implementation. This systematic review investigates the understudied aspects of recommendation systems, including the data input into the systems and their features or outputs. 
The data in (input) and data out (features) are both diverse and vary significantly from not just one application domain to another, but also from one application use case to another, which is a distinction that has not been thoroughly addressed in the past. In addition, this study explores several application domains, providing a comprehensive breakdown of the categorical data consumed by these systems and the features, or outputs, of these systems. Without focusing on any particular journals or their rankings, this study collects and reviews articles on recommendation systems published from 2018 to April 2024, in four top-tier research repositories, including IEEE Xplore Digital Library, Springer Link, ACM Digital Library, and Google Scholar.</p> ]]></content:encoded> <dc:title>Recommender Systems Applications: Data Sources, Features, and Challenges</dc:title> <dc:creator>Yousef H. Alfaifi</dc:creator> <dc:identifier>doi: 10.3390/info15100660</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-21</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-21</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>10</prism:number> <prism:section>Systematic Review</prism:section> <prism:startingPage>660</prism:startingPage> <prism:doi>10.3390/info15100660</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/10/660</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/10/659"> <title>Information, Vol. 15, Pages 659: Sentence Embeddings and Semantic Entity Extraction for Identification of Topics of Short Fact-Checked Claims</title> <link>https://www.mdpi.com/2078-2489/15/10/659</link> <description>The objective of this research was to design a method to assign topics to claims debunked by fact-checking agencies. 
During the fact-checking process, access to more structured knowledge is necessary; therefore, we aim to describe topics with semantic vocabulary. Classification of topics should go beyond simple connotations like instance-class and rather reflect broader phenomena that are recognized by fact checkers. The assignment of semantic entities is also crucial for the automatic verification of facts using the underlying knowledge graphs. Our method is based on sentence embeddings, various clustering methods (HDBSCAN, UMAP, K-means), semantic entity matching, and term importance assessment based on TF-IDF. We represent our topics in semantic space using Wikidata Q-ids, DBpedia, Wikipedia topics, YAGO, and other relevant ontologies. Such an approach based on semantic entities also supports hierarchical navigation within topics. For evaluation, we compare topic modeling results with claims already tagged by fact checkers. The work presented in this paper is useful for researchers and practitioners interested in semantic topic modeling of fake news narratives.</description> <pubDate>2024-10-21</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 659: Sentence Embeddings and Semantic Entity Extraction for Identification of Topics of Short Fact-Checked Claims</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/10/659">doi: 10.3390/info15100659</a></p> <p>Authors: Krzysztof Węcel Marcin Sawiński Włodzimierz Lewoniewski Milena Stróżyna Ewelina Księżniak Witold Abramowicz </p> <p>The objective of this research was to design a method to assign topics to claims debunked by fact-checking agencies. During the fact-checking process, access to more structured knowledge is necessary; therefore, we aim to describe topics with semantic vocabulary. Classification of topics should go beyond simple connotations like instance-class and rather reflect broader phenomena that are recognized by fact checkers. 
The assignment of semantic entities is also crucial for the automatic verification of facts using the underlying knowledge graphs. Our method is based on sentence embeddings, various clustering methods (HDBSCAN, UMAP, K-means), semantic entity matching, and term importance assessment based on TF-IDF. We represent our topics in semantic space using Wikidata Q-ids, DBpedia, Wikipedia topics, YAGO, and other relevant ontologies. Such an approach based on semantic entities also supports hierarchical navigation within topics. For evaluation, we compare topic modeling results with claims already tagged by fact checkers. The work presented in this paper is useful for researchers and practitioners interested in semantic topic modeling of fake news narratives.</p> ]]></content:encoded> <dc:title>Sentence Embeddings and Semantic Entity Extraction for Identification of Topics of Short Fact-Checked Claims</dc:title> <dc:creator>Krzysztof Węcel</dc:creator> <dc:creator>Marcin Sawiński</dc:creator> <dc:creator>Włodzimierz Lewoniewski</dc:creator> <dc:creator>Milena Stróżyna</dc:creator> <dc:creator>Ewelina Księżniak</dc:creator> <dc:creator>Witold Abramowicz</dc:creator> <dc:identifier>doi: 10.3390/info15100659</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-21</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-21</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>10</prism:number> <prism:section>Article</prism:section> <prism:startingPage>659</prism:startingPage> <prism:doi>10.3390/info15100659</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/10/659</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/10/658"> <title>Information, Vol. 
15, Pages 658: Android Malware Detection Using Support Vector Regression for Dynamic Feature Analysis</title> <link>https://www.mdpi.com/2078-2489/15/10/658</link> <description>Mobile devices face significant security challenges due to the increasing proliferation of Android malware. This study introduces an innovative approach to Android malware detection, combining Support Vector Regression (SVR) and dynamic feature analysis to address escalating mobile security challenges. Our research aimed to develop a more accurate and reliable malware detection system capable of identifying both known and novel malware variants. We implemented a comprehensive methodology encompassing dynamic feature extraction from Android applications, feature preprocessing and normalization, and the application of SVR with a Radial Basis Function (RBF) kernel for malware classification. Our results demonstrate the SVR-based model&amp;rsquo;s superior performance, achieving 95.74% accuracy, 94.76% precision, 98.06% recall, and a 96.38% F1-score, outperforming benchmark algorithms including SVM, Random Forest, and CNN. The model exhibited excellent discriminative ability with an Area Under the Curve (AUC) of 0.98 in ROC analysis. The proposed model&amp;rsquo;s capacity to capture complex, non-linear relationships in the feature space significantly enhanced its effectiveness in distinguishing between benign and malicious applications. This research provides a robust foundation for advancing Android malware detection systems, offering valuable insights for researchers and security practitioners in addressing evolving malware challenges.</description> <pubDate>2024-10-19</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 
15, Pages 658: Android Malware Detection Using Support Vector Regression for Dynamic Feature Analysis</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/10/658">doi: 10.3390/info15100658</a></p> <p>Authors: Nahier Aldhafferi </p> <p>Mobile devices face significant security challenges due to the increasing proliferation of Android malware. This study introduces an innovative approach to Android malware detection, combining Support Vector Regression (SVR) and dynamic feature analysis to address escalating mobile security challenges. Our research aimed to develop a more accurate and reliable malware detection system capable of identifying both known and novel malware variants. We implemented a comprehensive methodology encompassing dynamic feature extraction from Android applications, feature preprocessing and normalization, and the application of SVR with a Radial Basis Function (RBF) kernel for malware classification. Our results demonstrate the SVR-based model&amp;rsquo;s superior performance, achieving 95.74% accuracy, 94.76% precision, 98.06% recall, and a 96.38% F1-score, outperforming benchmark algorithms including SVM, Random Forest, and CNN. The model exhibited excellent discriminative ability with an Area Under the Curve (AUC) of 0.98 in ROC analysis. The proposed model&amp;rsquo;s capacity to capture complex, non-linear relationships in the feature space significantly enhanced its effectiveness in distinguishing between benign and malicious applications. 
This research provides a robust foundation for advancing Android malware detection systems, offering valuable insights for researchers and security practitioners in addressing evolving malware challenges.</p> ]]></content:encoded> <dc:title>Android Malware Detection Using Support Vector Regression for Dynamic Feature Analysis</dc:title> <dc:creator>Nahier Aldhafferi</dc:creator> <dc:identifier>doi: 10.3390/info15100658</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-19</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-19</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>10</prism:number> <prism:section>Article</prism:section> <prism:startingPage>658</prism:startingPage> <prism:doi>10.3390/info15100658</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/10/658</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/10/657"> <title>Information, Vol. 15, Pages 657: An Intelligent Approach to Automated Operating Systems Log Analysis for Enhanced Security</title> <link>https://www.mdpi.com/2078-2489/15/10/657</link> <description>Self-healing systems have become essential in modern computing for ensuring continuous and secure operations while minimising downtime and maintenance costs. These systems autonomously detect, diagnose, and correct anomalies, with effective self-healing relying on accurate interpretation of system logs generated by operating systems (OSs). Manual analysis of these logs in complex environments is often cumbersome, time-consuming, and error-prone, highlighting the need for automated, reliable log analysis methods. Our research introduces an intelligent methodology for creating self-healing systems for multiple OSs, focusing on log classification using CountVectorizer and the Multinomial Naive Bayes algorithm. 
This approach involves preprocessing OS logs to ensure quality, converting them into a numerical format with CountVectorizer, and then classifying them using the Naive Bayes algorithm. The system classifies multiple OS logs into distinct categories, identifying errors and warnings. We tested our model on logs from four major OSs (Mac, Android, Linux, and Windows), sourced from Zenodo to simulate real-world scenarios. The model&amp;rsquo;s accuracy, precision, and reliability were evaluated, demonstrating its potential for deployment in practical self-healing systems.</description> <pubDate>2024-10-19</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 657: An Intelligent Approach to Automated Operating Systems Log Analysis for Enhanced Security</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/10/657">doi: 10.3390/info15100657</a></p> <p>Authors: Obinna Johnphill Ali Safaa Sadiq Omprakash Kaiwartya Mohammad Aljaidi </p> <p>Self-healing systems have become essential in modern computing for ensuring continuous and secure operations while minimising downtime and maintenance costs. These systems autonomously detect, diagnose, and correct anomalies, with effective self-healing relying on accurate interpretation of system logs generated by operating systems (OSs). Manual analysis of these logs in complex environments is often cumbersome, time-consuming, and error-prone, highlighting the need for automated, reliable log analysis methods. Our research introduces an intelligent methodology for creating self-healing systems for multiple OSs, focusing on log classification using CountVectorizer and the Multinomial Naive Bayes algorithm. This approach involves preprocessing OS logs to ensure quality, converting them into a numerical format with CountVectorizer, and then classifying them using the Naive Bayes algorithm. The system classifies multiple OS logs into distinct categories, identifying errors and warnings. 
We tested our model on logs from four major OSs (Mac, Android, Linux, and Windows), sourced from Zenodo to simulate real-world scenarios. The model&amp;rsquo;s accuracy, precision, and reliability were evaluated, demonstrating its potential for deployment in practical self-healing systems.</p> ]]></content:encoded> <dc:title>An Intelligent Approach to Automated Operating Systems Log Analysis for Enhanced Security</dc:title> <dc:creator>Obinna Johnphill</dc:creator> <dc:creator>Ali Safaa Sadiq</dc:creator> <dc:creator>Omprakash Kaiwartya</dc:creator> <dc:creator>Mohammad Aljaidi</dc:creator> <dc:identifier>doi: 10.3390/info15100657</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-19</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-19</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>10</prism:number> <prism:section>Article</prism:section> <prism:startingPage>657</prism:startingPage> <prism:doi>10.3390/info15100657</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/10/657</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/10/655"> <title>Information, Vol. 15, Pages 655: MRI Super-Resolution Analysis via MRISR: Deep Learning for Low-Field Imaging</title> <link>https://www.mdpi.com/2078-2489/15/10/655</link> <description>This paper presents a novel MRI super-resolution analysis model, MRISR. Through the utilization of generative adversarial networks for the estimation of degradation kernels and the injection of noise, we have constructed a comprehensive dataset of high-quality paired high- and low-resolution MRI images. The MRISR model seamlessly integrates VMamba and Transformer technologies, demonstrating superior performance across various no-reference image quality assessment metrics compared with existing methodologies. 
It effectively reconstructs high-resolution MRI images while meticulously preserving intricate texture details, achieving a fourfold enhancement in resolution. This research endeavor represents a significant advancement in the field of MRI super-resolution analysis, contributing a cost-effective solution for rapid MRI technology that holds immense promise for widespread adoption in clinical diagnostic applications.</description> <pubDate>2024-10-19</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 655: MRI Super-Resolution Analysis via MRISR: Deep Learning for Low-Field Imaging</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/10/655">doi: 10.3390/info15100655</a></p> <p>Authors: Yunhe Li Mei Yang Tao Bian Haitao Wu </p> <p>This paper presents a novel MRI super-resolution analysis model, MRISR. Through the utilization of generative adversarial networks for the estimation of degradation kernels and the injection of noise, we have constructed a comprehensive dataset of high-quality paired high- and low-resolution MRI images. The MRISR model seamlessly integrates VMamba and Transformer technologies, demonstrating superior performance across various no-reference image quality assessment metrics compared with existing methodologies. It effectively reconstructs high-resolution MRI images while meticulously preserving intricate texture details, achieving a fourfold enhancement in resolution. 
This research endeavor represents a significant advancement in the field of MRI super-resolution analysis, contributing a cost-effective solution for rapid MRI technology that holds immense promise for widespread adoption in clinical diagnostic applications.</p> ]]></content:encoded> <dc:title>MRI Super-Resolution Analysis via MRISR: Deep Learning for Low-Field Imaging</dc:title> <dc:creator>Yunhe Li</dc:creator> <dc:creator>Mei Yang</dc:creator> <dc:creator>Tao Bian</dc:creator> <dc:creator>Haitao Wu</dc:creator> <dc:identifier>doi: 10.3390/info15100655</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-19</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-19</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>10</prism:number> <prism:section>Article</prism:section> <prism:startingPage>655</prism:startingPage> <prism:doi>10.3390/info15100655</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/10/655</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <item rdf:about="https://www.mdpi.com/2078-2489/15/10/656"> <title>Information, Vol. 15, Pages 656: Fuzzy Logic Concepts, Developments and Implementation</title> <link>https://www.mdpi.com/2078-2489/15/10/656</link> <description>Over the past few decades, the field of fuzzy logic has evolved significantly, leading to the development of diverse techniques and applications. Fuzzy logic has been successfully combined with other artificial intelligence techniques such as artificial neural networks, deep learning, robotics, and genetic algorithms, creating powerful tools for complex problem-solving applications. This article provides an informative description of some of the main concepts in the field of fuzzy logic. These include the types and roles of membership functions, fuzzy inference system (FIS), adaptive neuro-fuzzy inference system and fuzzy c-means clustering. 
The processes of fuzzification, defuzzification, implication, and determining fuzzy rules&amp;rsquo; firing strengths are described. The article outlines some recent developments in the field of fuzzy logic, including its applications for decision support, industrial processes and control, data and telecommunication, and image and signal processing. Approaches to implementing fuzzy logic models are explained and, as an illustration, MATLAB (version R2024b) is used to demonstrate implementation of a FIS. The prospects for future fuzzy logic developments are explored and example applications of hybrid fuzzy logic systems are provided. There remain extensive opportunities in further developing fuzzy logic-based techniques, including their further integration with various machine learning algorithms, and their adaptation into consumer products and industrial processes.</description> <pubDate>2024-10-19</pubDate> <content:encoded><![CDATA[ <p><b>Information, Vol. 15, Pages 656: Fuzzy Logic Concepts, Developments and Implementation</b></p> <p>Information <a href="https://www.mdpi.com/2078-2489/15/10/656">doi: 10.3390/info15100656</a></p> <p>Authors: Reza Saatchi </p> <p>Over the past few decades, the field of fuzzy logic has evolved significantly, leading to the development of diverse techniques and applications. Fuzzy logic has been successfully combined with other artificial intelligence techniques such as artificial neural networks, deep learning, robotics, and genetic algorithms, creating powerful tools for complex problem-solving applications. This article provides an informative description of some of the main concepts in the field of fuzzy logic. These include the types and roles of membership functions, fuzzy inference system (FIS), adaptive neuro-fuzzy inference system and fuzzy c-means clustering. The processes of fuzzification, defuzzification, implication, and determining fuzzy rules&amp;rsquo; firing strengths are described. 
The article outlines some recent developments in the field of fuzzy logic, including its applications for decision support, industrial processes and control, data and telecommunication, and image and signal processing. Approaches to implementing fuzzy logic models are explained and, as an illustration, MATLAB (version R2024b) is used to demonstrate implementation of a FIS. The prospects for future fuzzy logic developments are explored and example applications of hybrid fuzzy logic systems are provided. There remain extensive opportunities in further developing fuzzy logic-based techniques, including their further integration with various machine learning algorithms, and their adaptation into consumer products and industrial processes.</p> ]]></content:encoded> <dc:title>Fuzzy Logic Concepts, Developments and Implementation</dc:title> <dc:creator>Reza Saatchi</dc:creator> <dc:identifier>doi: 10.3390/info15100656</dc:identifier> <dc:source>Information</dc:source> <dc:date>2024-10-19</dc:date> <prism:publicationName>Information</prism:publicationName> <prism:publicationDate>2024-10-19</prism:publicationDate> <prism:volume>15</prism:volume> <prism:number>10</prism:number> <prism:section>Article</prism:section> <prism:startingPage>656</prism:startingPage> <prism:doi>10.3390/info15100656</prism:doi> <prism:url>https://www.mdpi.com/2078-2489/15/10/656</prism:url> <cc:license rdf:resource="CC BY 4.0"/> </item> <cc:License rdf:about="https://creativecommons.org/licenses/by/4.0/"> <cc:permits rdf:resource="https://creativecommons.org/ns#Reproduction" /> <cc:permits rdf:resource="https://creativecommons.org/ns#Distribution" /> <cc:permits rdf:resource="https://creativecommons.org/ns#DerivativeWorks" /> </cc:License> </rdf:RDF>