Search results for: temporal features
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="temporal features"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4836</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: temporal features</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4836</span> Frequency Modulation Continuous Wave Radar Human Fall Detection Based on Time-Varying Range-Doppler Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiang%20Yu">Xiang Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chuntao%20Feng"> Chuntao Feng</a>, <a href="https://publications.waset.org/abstracts/search?q=Lu%20Yang"> Lu Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Meiyang%20Song"> Meiyang Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Wenhao%20Zhou"> Wenhao Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The existing two-dimensional micro-Doppler features extraction ignores the correlation information between the spatial and temporal dimension features. For the range-Doppler map, the time dimension is introduced, and a frequency modulation continuous wave (FMCW) radar human fall detection algorithm based on time-varying range-Doppler features is proposed. Firstly, the range-Doppler sequence maps are generated from the echo signals of the continuous motion of the human body collected by the radar. Then the three-dimensional data cube composed of multiple frames of range-Doppler maps is input into the three-dimensional Convolutional Neural Network (3D CNN). The spatial and temporal features of time-varying range-Doppler are extracted by the convolution layer and pool layer at the same time. Finally, the extracted spatial and temporal features are input into the fully connected layer for classification. The experimental results show that the proposed fall detection algorithm has a detection accuracy of 95.66%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=FMCW%20radar" title="FMCW radar">FMCW radar</a>, <a href="https://publications.waset.org/abstracts/search?q=fall%20detection" title=" fall detection"> fall detection</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20CNN" title=" 3D CNN"> 3D CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=time-varying%20range-doppler%20features" title=" time-varying range-doppler features"> time-varying range-doppler features</a> </p> <a href="https://publications.waset.org/abstracts/150637/frequency-modulation-continuous-wave-radar-human-fall-detection-based-on-time-varying-range-doppler-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150637.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">122</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4835</span> Using New Machine Algorithms to Classify Iranian Musical Instruments According to Temporal, Spectral and Coefficient Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ronak%20Khosravi">Ronak Khosravi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmood%20Abbasi%20Layegh"> Mahmood Abbasi Layegh</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamak%20Haghipour"> Siamak Haghipour</a>, <a href="https://publications.waset.org/abstracts/search?q=Avin%20Esmaili"> Avin Esmaili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a study on classification of musical woodwind instruments using a small set of features selected from a broad range of extracted ones by the sequential forward selection method was carried out. Firstly, we extract 42 features for each record in the music database of 402 sound files belonging to five different groups of Flutes (end blown and internal duct), Single –reed, Double –reed (exposed and capped), Triple reed and Quadruple reed. Then, the sequential forward selection method is adopted to choose the best feature set in order to achieve very high classification accuracy. Two different classification techniques of support vector machines and relevance vector machines have been tested out and an accuracy of up to 96% can be achieved by using 21 time, frequency and coefficient features and relevance vector machine with the Gaussian kernel function. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=coefficient%20features" title="coefficient features">coefficient features</a>, <a href="https://publications.waset.org/abstracts/search?q=relevance%20vector%20machines" title=" relevance vector machines"> relevance vector machines</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20features" title=" spectral features"> spectral features</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machines" title=" support vector machines"> support vector machines</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20features" title=" temporal features"> temporal features</a> </p> <a href="https://publications.waset.org/abstracts/54321/using-new-machine-algorithms-to-classify-iranian-musical-instruments-according-to-temporal-spectral-and-coefficient-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54321.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4834</span> Speech Emotion Recognition with Bi-GRU and Self-Attention based Feature Representation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bubai%20Maji">Bubai Maji</a>, <a href="https://publications.waset.org/abstracts/search?q=Monorama%20Swain"> Monorama Swain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech is considered an essential and most natural medium for the interaction between machines and humans. However, extracting effective features for speech emotion recognition (SER) is remains challenging. The present studies show that the temporal information captured but high-level temporal-feature learning is yet to be investigated. In this paper, we present an efficient novel method using the Self-attention (SA) mechanism in a combination of Convolutional Neural Network (CNN) and Bi-directional Gated Recurrent Unit (Bi-GRU) network to learn high-level temporal-feature. In order to further enhance the representation of the high-level temporal-feature, we integrate a Bi-GRU output with learnable weights features by SA, and improve the performance. We evaluate our proposed method on our created SITB-OSED and IEMOCAP databases. We report that the experimental results of our proposed method achieve state-of-the-art performance on both databases. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bi-GRU" title="Bi-GRU">Bi-GRU</a>, <a href="https://publications.waset.org/abstracts/search?q=1D-CNNs" title=" 1D-CNNs"> 1D-CNNs</a>, <a href="https://publications.waset.org/abstracts/search?q=self-attention" title=" self-attention"> self-attention</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20emotion%20recognition" title=" speech emotion recognition"> speech emotion recognition</a> </p> <a href="https://publications.waset.org/abstracts/148332/speech-emotion-recognition-with-bi-gru-and-self-attention-based-feature-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">113</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4833</span> Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Arabi">Mohammad Arabi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. 

4833. Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract: The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
Keywords: electric motor, fault detection, frequency features, temporal features
Procedia: https://publications.waset.org/abstracts/186563/utilizing-temporal-and-frequency-features-in-fault-detection-of-electric-motor-bearings-with-advanced-methods | PDF: https://publications.waset.org/abstracts/186563.pdf | Downloads: 47
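
A compact sketch of the described pipeline, from temporal and frequency feature extraction through a t-test to an SVM, on synthetic vibration windows; the signal model, feature set and band limits are illustrative assumptions, not the study's data or settings.

```python
# Sketch: temporal/frequency feature extraction, a t-test and an SVM on synthetic
# vibration windows; the signal model, feature set and band limits are illustrative.
import numpy as np
from scipy import stats
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs, n = 10_000, 4096                                  # assumed sampling rate, window length

def make_window(faulty: bool) -> np.ndarray:
    t = np.arange(n) / fs
    sig = rng.normal(0.0, 1.0, n)
    if faulty:                                        # fault adds an intermittent 300 Hz component
        sig += 0.8 * np.sin(2 * np.pi * 300 * t) * (rng.random(n) < 0.3)
    return sig

def features(sig: np.ndarray) -> np.ndarray:
    spec = np.abs(np.fft.rfft(sig))
    return np.array([
        np.sqrt(np.mean(sig ** 2)),                   # RMS (temporal)
        stats.kurtosis(sig),                          # kurtosis (temporal)
        np.max(np.abs(sig)),                          # peak value (temporal)
        spec[100:400].sum() / spec.sum(),             # relative band energy (frequency)
    ])

X = np.array([features(make_window(f)) for f in [False] * 50 + [True] * 50])
y = np.array([0] * 50 + [1] * 50)

t_stat, p_val = stats.ttest_ind(X[y == 0, 0], X[y == 1, 0])   # does RMS differ between classes?
print(f"t-test on RMS: t={t_stat:.2f}, p={p_val:.3g}")

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```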
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=time%20perception" title="time perception">time perception</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20present" title=" perceptual present"> perceptual present</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20displacement" title=" temporal displacement"> temporal displacement</a>, <a href="https://publications.waset.org/abstracts/search?q=Gestalt%20laws%20of%20perceptual%20organization" title=" Gestalt laws of perceptual organization"> Gestalt laws of perceptual organization</a> </p> <a href="https://publications.waset.org/abstracts/76211/perceptual-organization-within-temporal-displacement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/76211.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4831</span> Attention-Based Spatio-Temporal Approach for Fire and Smoke Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alireza%20Mirrashid">Alireza Mirrashid</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Khoshbin"> Mohammad Khoshbin</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Atghaei"> Ali Atghaei</a>, <a href="https://publications.waset.org/abstracts/search?q=Hassan%20Shahbazi"> Hassan Shahbazi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In various industries, smoke and fire are two of the most important threats in the workplace. One of the common methods for detecting smoke and fire is the use of infrared thermal and smoke sensors, which cannot be used in outdoor applications. Therefore, the use of vision-based methods seems necessary. The problem of smoke and fire detection is spatiotemporal and requires spatiotemporal solutions. This paper presents a method that uses spatial features along with temporal-based features to detect smoke and fire in the scene. It consists of three main parts; the task of each part is to reduce the error of the previous part so that the final model has a robust performance. This method also uses transformer modules to increase the accuracy of the model. The results of our model show the proper performance of the proposed approach in solving the problem of smoke and fire detection and can be used to increase workplace safety. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attention" title="attention">attention</a>, <a href="https://publications.waset.org/abstracts/search?q=fire%20detection" title=" fire detection"> fire detection</a>, <a href="https://publications.waset.org/abstracts/search?q=smoke%20detection" title=" smoke detection"> smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=spatio-temporal" title=" spatio-temporal"> spatio-temporal</a> </p> <a href="https://publications.waset.org/abstracts/153248/attention-based-spatio-temporal-approach-for-fire-and-smoke-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153248.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">203</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4830</span> Reconsidering Taylor’s Law with Chaotic Population Dynamical Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuzuru%20Mitsui">Yuzuru Mitsui</a>, <a href="https://publications.waset.org/abstracts/search?q=Takashi%20Ikegami"> Takashi Ikegami</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The exponents of Taylor’s law in deterministic chaotic systems are computed, and their meanings are intensively discussed. Taylor’s law is the scaling relationship between the mean and variance (in both space and time) of population abundance, and this law is known to hold in a variety of ecological time series. The exponents found in the temporal Taylor’s law are different from those of the spatial Taylor’s law. The temporal Taylor’s law is calculated on the time series from the same locations (or the same initial states) of different temporal phases. However, with the spatial Taylor’s law, the mean and variance are calculated from the same temporal phase sampled from different places. Most previous studies were done with stochastic models, but we computed the temporal and spatial Taylor’s law in deterministic systems. The temporal Taylor’s law evaluated using the same initial state, and the spatial Taylor’s law was evaluated using the ensemble average and variance. There were two main discoveries from this work. First, it is often stated that deterministic systems tend to have the value two for Taylor’s exponent. However, most of the calculated exponents here were not two. Second, we investigated the relationships between chaotic features measured by the Lyapunov exponent, the correlation dimension, and other indexes with Taylor’s exponents. No strong correlations were found; however, there is some relationship in the same model, but with different parameter values, and we will discuss the meaning of those results at the end of this paper. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chaos" title="chaos">chaos</a>, <a href="https://publications.waset.org/abstracts/search?q=density%20effect" title=" density effect"> density effect</a>, <a href="https://publications.waset.org/abstracts/search?q=population%20dynamics" title=" population dynamics"> population dynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=Taylor%E2%80%99s%20law" title=" Taylor’s law"> Taylor’s law</a> </p> <a href="https://publications.waset.org/abstracts/109945/reconsidering-taylors-law-with-chaotic-population-dynamical-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/109945.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">174</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4829</span> A Temporal QoS Ontology For ERTMS/ETCS</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marc%20Sango">Marc Sango</a>, <a href="https://publications.waset.org/abstracts/search?q=Olimpia%20Hoinaru"> Olimpia Hoinaru</a>, <a href="https://publications.waset.org/abstracts/search?q=Christophe%20Gransart"> Christophe Gransart</a>, <a href="https://publications.waset.org/abstracts/search?q=Laurence%20Duchien"> Laurence Duchien</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ontologies offer a means for representing and sharing information in many domains, particularly in complex domains. For example, it can be used for representing and sharing information of System Requirement Specification (SRS) of complex systems like the SRS of ERTMS/ETCS written in natural language. Since this system is a real-time and critical system, generic ontologies, such as OWL and generic ERTMS ontologies provide minimal support for modeling temporal information omnipresent in these SRS documents. To support the modeling of temporal information, one of the challenges is to enable representation of dynamic features evolving in time within a generic ontology with a minimal redesign of it. The separation of temporal information from other information can help to predict system runtime operation and to properly design and implement them. In addition, it is helpful to provide a reasoning and querying techniques to reason and query temporal information represented in the ontology in order to detect potential temporal inconsistencies. Indeed, a user operation, such as adding a new constraint on existing planning constraints can cause temporal inconsistencies, which can lead to system failures. To address this challenge, we propose a lightweight 3-layer temporal Quality of Service (QoS) ontology for representing, reasoning and querying over temporal and non-temporal information in a complex domain ontology. Representing QoS entities in separated layers can clarify the distinction between the non QoS entities and the QoS entities in an ontology. The upper generic layer of the proposed ontology provides an intuitive knowledge of domain components, specially ERTMS/ETCS components. The separation of the intermediate QoS layer from the lower QoS layer allows us to focus on specific QoS Characteristics, such as temporal or integrity characteristics. 

4829. A Temporal QoS Ontology for ERTMS/ETCS
Authors: Marc Sango, Olimpia Hoinaru, Christophe Gransart, Laurence Duchien
Abstract: Ontologies offer a means for representing and sharing information in many domains, particularly in complex domains. For example, they can be used for representing and sharing information of the System Requirement Specification (SRS) of complex systems, like the SRS of ERTMS/ETCS written in natural language. Since this system is a real-time and critical system, generic ontologies, such as OWL, and generic ERTMS ontologies provide minimal support for modeling the temporal information omnipresent in these SRS documents. To support the modeling of temporal information, one of the challenges is to enable the representation of dynamic features evolving in time within a generic ontology with a minimal redesign of it. The separation of temporal information from other information can help to predict system runtime operation and to properly design and implement it. In addition, it is helpful to provide reasoning and querying techniques to reason over and query the temporal information represented in the ontology in order to detect potential temporal inconsistencies. Indeed, a user operation, such as adding a new constraint on existing planning constraints, can cause temporal inconsistencies, which can lead to system failures. To address this challenge, we propose a lightweight 3-layer temporal Quality of Service (QoS) ontology for representing, reasoning over and querying temporal and non-temporal information in a complex domain ontology. Representing QoS entities in separate layers clarifies the distinction between the non-QoS entities and the QoS entities in the ontology. The upper generic layer of the proposed ontology provides an intuitive knowledge of domain components, especially ERTMS/ETCS components. The separation of the intermediate QoS layer from the lower QoS layer allows us to focus on specific QoS characteristics, such as temporal or integrity characteristics. In this paper, we focus on temporal information that can be used to predict system runtime operation. To evaluate our approach, an example of the proposed domain ontology for the handover operation, as well as a reasoning rule over temporal relations in this domain-specific ontology, is given.
Keywords: system requirement specification, ERTMS/ETCS, temporal ontologies, domain ontologies
Procedia: https://publications.waset.org/abstracts/20625/a-temporal-qos-ontology-for-ertmsetcs | PDF: https://publications.waset.org/abstracts/20625.pdf | Downloads: 422
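
The ontology work above relies on OWL-style reasoning; as a loose, plain-Python stand-in, the sketch below only illustrates the kind of temporal-consistency check such reasoning performs, with hypothetical handover phases and simple "before" constraints rather than the paper's ontology layers.

```python
# Sketch: intervals with "A finishes before B starts" constraints are checked for cycles,
# which would indicate an inconsistent plan. The ERTMS handover phases named here are
# illustrative, not taken from the ontology.
from collections import defaultdict

def consistent(before: list[tuple[str, str]]) -> bool:
    """Return True if the 'before' constraints admit at least one ordering."""
    graph = defaultdict(set)
    for a, b in before:
        graph[a].add(b)

    visiting, done = set(), set()

    def has_cycle(node: str) -> bool:
        if node in done:
            return False
        if node in visiting:
            return True
        visiting.add(node)
        cyc = any(has_cycle(nxt) for nxt in graph[node])
        visiting.discard(node)
        done.add(node)
        return cyc

    return not any(has_cycle(n) for n in list(graph))

constraints = [("announce_handover", "switch_channel"), ("switch_channel", "confirm_handover")]
print(consistent(constraints))                                                 # True
print(consistent(constraints + [("confirm_handover", "announce_handover")]))   # False: inconsistency
```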
title=" HMM."> HMM.</a>, <a href="https://publications.waset.org/abstracts/search?q=Indian%20sign%20language" title=" Indian sign language"> Indian sign language</a> </p> <a href="https://publications.waset.org/abstracts/35653/hand-motion-trajectory-analysis-for-dynamic-hand-gestures-used-in-indian-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35653.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">370</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4827</span> Multi-Temporal Cloud Detection and Removal in Satellite Imagery for Land Resources Investigation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Feng%20Yin">Feng Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Clouds are inevitable contaminants in optical satellite imagery, and prevent the satellite imaging systems from acquiring clear view of the earth surface. The presence of clouds in satellite imagery bring negative influences for remote sensing land resources investigation. As a consequence, detecting the locations of clouds in satellite imagery is an essential preprocessing step, and further remove the existing clouds is crucial for the application of imagery. In this paper, a multi-temporal based satellite imagery cloud detection and removal method is proposed, which will be used for large-scale land resource investigation. The proposed method is mainly composed of four steps. First, cloud masks are generated for cloud contaminated images by single temporal cloud detection based on multiple spectral features. Then, a cloud-free reference image of target areas is synthesized by weighted averaging time-series images in which cloud pixels are ignored. Thirdly, the refined cloud detection results are acquired by multi-temporal analysis based on the reference image. Finally, detected clouds are removed via multi-temporal linear regression. The results of a case application in Hubei province indicate that the proposed multi-temporal cloud detection and removal method is effective and promising for large-scale land resource investigation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cloud%20detection" title="cloud detection">cloud detection</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud%20remove" title=" cloud remove"> cloud remove</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-temporal%20imagery" title=" multi-temporal imagery"> multi-temporal imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=land%20resources%20investigation" title=" land resources investigation"> land resources investigation</a> </p> <a href="https://publications.waset.org/abstracts/90359/multi-temporal-cloud-detection-and-removal-in-satellite-imagery-for-land-resources-investigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/90359.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4826</span> A Network of Nouns and Their Features :A Neurocomputational Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Skiker%20Kaoutar">Skiker Kaoutar</a>, <a href="https://publications.waset.org/abstracts/search?q=Mounir%20Maouene"> Mounir Maouene </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Neuroimaging studies indicate that a large fronto-parieto-temporal network support nouns and their features, with some areas store semantic knowledge (visual, auditory, olfactory, gustatory,…), other areas store lexical representation and other areas are implicated in general semantic processing. However, it is not well understood how this fronto-parieto-temporal network can be modulated by different semantic tasks and different semantic relations between nouns. In this study, we combine a behavioral semantic network, functional MRI studies involving object’s related nouns and brain network studies to explain how different semantic tasks and different semantic relations between nouns can modulate the activity within the brain network of nouns and their features. We first describe how nouns and their features form a large scale brain network. For this end, we examine the connectivities between areas recruited during the processing of nouns to know which configurations of interaction areas are possible. We can thus identify if, for example, brain areas that store semantic knowledge communicate via functional/structural links with areas that store lexical representations. Second, we examine how this network is modulated by different semantic tasks involving nouns and finally, we examine how category specific activation may result from the semantic relations among nouns. The results indicate that brain network of nouns and their features is highly modulated and flexible by different semantic tasks and semantic relations. At the end, this study can be used as a guide to help neurosientifics to interpret the pattern of fMRI activations detected in the semantic processing of nouns. Specifically; this study can help to interpret the category specific activations observed extensively in a large number of neuroimaging studies and clinical studies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nouns" title="nouns">nouns</a>, <a href="https://publications.waset.org/abstracts/search?q=features" title=" features"> features</a>, <a href="https://publications.waset.org/abstracts/search?q=network" title=" network"> network</a>, <a href="https://publications.waset.org/abstracts/search?q=category%20specificity" title=" category specificity"> category specificity</a> </p> <a href="https://publications.waset.org/abstracts/18889/a-network-of-nouns-and-their-features-a-neurocomputational-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18889.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">521</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4825</span> Multi-scale Spatial and Unified Temporal Feature-fusion Network for Multivariate Time Series Anomaly Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hang%20Yang">Hang Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jichao%20Li"> Jichao Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Kewei%20Yang"> Kewei Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Tianyang%20Lei"> Tianyang Lei</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multivariate time series anomaly detection is a significant research topic in the field of data mining, encompassing a wide range of applications across various industrial sectors such as traffic roads, financial logistics, and corporate production. The inherent spatial dependencies and temporal characteristics present in multivariate time series introduce challenges to the anomaly detection task. Previous studies have typically been based on the assumption that all variables belong to the same spatial hierarchy, neglecting the multi-level spatial relationships. To address this challenge, this paper proposes a multi-scale spatial and unified temporal feature fusion network, denoted as MSUT-Net, for multivariate time series anomaly detection. The proposed model employs a multi-level modeling approach, incorporating both temporal and spatial modules. The spatial module is designed to capture the spatial characteristics of multivariate time series data, utilizing an adaptive graph structure learning model to identify the multi-level spatial relationships between data variables and their attributes. The temporal module consists of a unified temporal processing module, which is tasked with capturing the temporal features of multivariate time series. This module is capable of simultaneously identifying temporal dependencies among different variables. Extensive testing on multiple publicly available datasets confirms that MSUT-Net achieves superior performance on the majority of datasets. Our method is able to model and accurately detect systems data with multi-level spatial relationships from a spatial-temporal perspective, providing a novel perspective for anomaly detection analysis. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title="data mining">data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=industrial%20system" title=" industrial system"> industrial system</a>, <a href="https://publications.waset.org/abstracts/search?q=multivariate%20time%20series" title=" multivariate time series"> multivariate time series</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title=" anomaly detection"> anomaly detection</a> </p> <a href="https://publications.waset.org/abstracts/193205/multi-scale-spatial-and-unified-temporal-feature-fusion-network-for-multivariate-time-series-anomaly-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193205.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">15</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4824</span> Spatio-Temporal Data Mining with Association Rules for Lake Van</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tolga%20Aydin">Tolga Aydin</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Fatih%20Alaeddino%C4%9Flu"> M. Fatih Alaeddinoğlu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> People, throughout the history, have made estimates and inferences about the future by using their past experiences. Developing information technologies and the improvements in the database management systems make it possible to extract useful information from knowledge in hand for the strategic decisions. Therefore, different methods have been developed. Data mining by association rules learning is one of such methods. Apriori algorithm, one of the well-known association rules learning algorithms, is not commonly used in spatio-temporal data sets. However, it is possible to embed time and space features into the data sets and make Apriori algorithm a suitable data mining technique for learning spatio-temporal association rules. Lake Van, the largest lake of Turkey, is a closed basin. This feature causes the volume of the lake to increase or decrease as a result of change in water amount it holds. In this study, evaporation, humidity, lake altitude, amount of rainfall and temperature parameters recorded in Lake Van region throughout the years are used by the Apriori algorithm and a spatio-temporal data mining application is developed to identify overflows and newly-formed soil regions (underflows) occurring in the coastal parts of Lake Van. Identifying possible reasons of overflows and underflows may be used to alert the experts to take precautions and make the necessary investments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=apriori%20algorithm" title="apriori algorithm">apriori algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=association%20rules" title=" association rules"> association rules</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title=" data mining"> data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=spatio-temporal%20data" title=" spatio-temporal data"> spatio-temporal data</a> </p> <a href="https://publications.waset.org/abstracts/31190/spatio-temporal-data-mining-with-association-rules-for-lake-van" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31190.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">374</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4823</span> Temporal Case-Based Reasoning System for Automatic Parking Complex</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alexander%20P.%20Eremeev">Alexander P. Eremeev</a>, <a href="https://publications.waset.org/abstracts/search?q=Ivan%20E.%20Kurilenko"> Ivan E. Kurilenko</a>, <a href="https://publications.waset.org/abstracts/search?q=Pavel%20R.%20Varshavskiy"> Pavel R. Varshavskiy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the problem of the application of temporal reasoning and case-based reasoning in intelligent decision support systems is considered. The method of case-based reasoning with temporal dependences for the solution of problems of real-time diagnostics and forecasting in intelligent decision support systems is described. This paper demonstrates how the temporal case-based reasoning system can be used in intelligent decision support systems of the car access control. This work was supported by RFBR. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=analogous%20reasoning" title="analogous reasoning">analogous reasoning</a>, <a href="https://publications.waset.org/abstracts/search?q=case-based%20reasoning" title=" case-based reasoning"> case-based reasoning</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20decision%20support%20systems" title=" intelligent decision support systems"> intelligent decision support systems</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20reasoning" title=" temporal reasoning"> temporal reasoning</a> </p> <a href="https://publications.waset.org/abstracts/21478/temporal-case-based-reasoning-system-for-automatic-parking-complex" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21478.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4822</span> Leveraging the Power of Dual Spatial-Temporal Data Scheme for Traffic Prediction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Zhou">Yang Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Heli%20Sun"> Heli Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianbin%20Huang"> Jianbin Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jizhong%20Zhao"> Jizhong Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaojie%20Qiao"> Shaojie Qiao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traffic prediction is a fundamental problem in urban environment, facilitating the smart management of various businesses, such as taxi dispatching, bike relocation, and stampede alert. Most earlier methods rely on identifying the intrinsic spatial-temporal correlation to forecast. However, the complex nature of this problem entails a more sophisticated solution that can simultaneously capture the mutual influence of both adjacent and far-flung areas, with the information of time-dimension also incorporated seamlessly. To tackle this difficulty, we propose a new multi-phase architecture, DSTDS (Dual Spatial-Temporal Data Scheme for traffic prediction), that aims to reveal the underlying relationship that determines future traffic trend. First, a graph-based neural network with an attention mechanism is devised to obtain the static features of the road network. Then, a multi-granularity recurrent neural network is built in conjunction with the knowledge from a grid-based model. Subsequently, the preceding output is fed into a spatial-temporal super-resolution module. With this 3-phase structure, we carry out extensive experiments on several real-world datasets to demonstrate the effectiveness of our approach, which surpasses several state-of-the-art methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20prediction" title="traffic prediction">traffic prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial-temporal" title=" spatial-temporal"> spatial-temporal</a>, <a href="https://publications.waset.org/abstracts/search?q=recurrent%20neural%20network" title=" recurrent neural network"> recurrent neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=dual%20data%20scheme" title=" dual data scheme"> dual data scheme</a> </p> <a href="https://publications.waset.org/abstracts/150299/leveraging-the-power-of-dual-spatial-temporal-data-scheme-for-traffic-prediction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150299.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4821</span> Temporal Characteristics of Human Perception to Significant Variation of Block Structures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kuo-Cheng%20Liu">Kuo-Cheng Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the latest research efforts, the structures of the image in the spatial domain have been successfully analyzed and proved to deduce the visual masking for accurately estimating the visibility thresholds of the image. If the structural properties of the video sequence in the temporal domain are taken into account to estimate the temporal masking, the improvement and enhancement of the as-sessing spatio-temporal visibility thresholds are reasonably expected. In this paper, the temporal characteristics of human perception to the change in block structures on the time axis are analyzed. The temporal characteristics of human perception are represented in terms of the significant variation in block structures for the analysis of human visual system (HVS). Herein, the block structure in each frame is computed by combined the pattern masking and the contrast masking simultaneously. The contrast masking always overestimates the visibility thresholds of edge regions and underestimates that of texture regions, while the pattern masking is weak on a uniform background and is strong on the complex background with spatial patterns. Under considering the significant variation of block structures between successive frames, we extend the block structures of images in the spatial domain to that of video sequences in the temporal domain to analyze the relation between the inter-frame variation of structures and the temporal masking. Meanwhile, the subjective viewing test and the fair rating process are designed to evaluate the consistency of the temporal characteristics with the HVS under a specified viewing condition. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=temporal%20characteristic" title="temporal characteristic">temporal characteristic</a>, <a href="https://publications.waset.org/abstracts/search?q=block%20structure" title=" block structure"> block structure</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20masking" title=" pattern masking"> pattern masking</a>, <a href="https://publications.waset.org/abstracts/search?q=contrast%20masking" title=" contrast masking"> contrast masking</a> </p> <a href="https://publications.waset.org/abstracts/35248/temporal-characteristics-of-human-perception-to-significant-variation-of-block-structures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35248.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4820</span> The Impact of Recurring Events in Fake News Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Raza">Ali Raza</a>, <a href="https://publications.waset.org/abstracts/search?q=Shafiq%20Ur%20Rehman%20Khan"> Shafiq Ur Rehman Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Raja%20Sher%20Afgun%20Usmani"> Raja Sher Afgun Usmani</a>, <a href="https://publications.waset.org/abstracts/search?q=Asif%20Raza"> Asif Raza</a>, <a href="https://publications.waset.org/abstracts/search?q=Basit%20Umair"> Basit Umair</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection of Fake news and missing information is gaining popularity, especially after the advancement in social media and online news platforms. Social media platforms are the main and speediest source of fake news propagation, whereas online news websites contribute to fake news dissipation. In this study, we propose a framework to detect fake news using the temporal features of text and consider user feedback to identify whether the news is fake or not. In recent studies, the temporal features in text documents gain valuable consideration from Natural Language Processing and user feedback and only try to classify the textual data as fake or true. This research article indicates the impact of recurring and non-recurring events on fake and true news. We use two models BERT and Bi-LSTM to investigate, and it is concluded from BERT we get better results and 70% of true news are recurring and rest of 30% are non-recurring. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title="natural language processing">natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=fake%20news%20detection" title=" fake news detection"> fake news detection</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Bi-LSTM" title=" Bi-LSTM"> Bi-LSTM</a> </p> <a href="https://publications.waset.org/abstracts/190551/the-impact-of-recurring-events-in-fake-news-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190551.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">22</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4819</span> Musical Instruments Classification Using Machine Learning Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhalke%20D.%20G.">Bhalke D. G.</a>, <a href="https://publications.waset.org/abstracts/search?q=Bormane%20D.%20S."> Bormane D. S.</a>, <a href="https://publications.waset.org/abstracts/search?q=Kharate%20G.%20K."> Kharate G. K.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents classification of musical instrument using machine learning techniques. The classification has been carried out using temporal, spectral, cepstral and wavelet features. Detail feature analysis is carried out using separate and combined features. Further, instrument model has been developed using K-Nearest Neighbor and Support Vector Machine (SVM). Benchmarked McGill university database has been used to test the performance of the system. Experimental result shows that SVM performs better as compared to KNN classifier. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/abstracts/search?q=KNN" title=" KNN"> KNN</a>, <a href="https://publications.waset.org/abstracts/search?q=musical%20instruments" title=" musical instruments"> musical instruments</a> </p> <a href="https://publications.waset.org/abstracts/23369/musical-instruments-classification-using-machine-learning-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23369.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">480</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4818</span> Spatial Patterns and Temporal Evolution of Octopus Abundance in the Mauritanian Zone</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dedah%20Ahmed%20Babou">Dedah Ahmed Babou</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicolas%20Bez"> Nicolas Bez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Min-Max autocorrelation factor (MAF) approach makes it possible to express in a space formed by spatially independent factors, spatiotemporal observations. These factors are ordered in decreasing order of spatial autocorrelation. The starting observations are thus expressed in the space formed by these factors according to temporal coordinates. Each vector of temporal coefficients expresses the temporal evolution of the weight of the corresponding factor. Applying this approach has enabled us to achieve the following results: (i) Define a spatially orthogonal space in which the projections of the raw data are determined; (ii) Define a limit threshold for the factors with the strongest structures in order to analyze the weight, and the temporal evolution of these different structures (iii) Study the correlation between the temporal evolution of the persistent spatial structures and that of the observed average abundance (iv) Propose prototypes of campaigns reflecting a high vs. low abundance (v) Propose a classification of campaigns that highlights seasonal and/or temporal similarities. These results were obtained by analyzing the octopus yield during the scientific campaigns of the oceanographic vessel Al Awam during the period 1989-2017 in the Mauritanian exclusive economic zone. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spatiotemporal" title="spatiotemporal ">spatiotemporal </a>, <a href="https://publications.waset.org/abstracts/search?q=autocorrelation" title=" autocorrelation"> autocorrelation</a>, <a href="https://publications.waset.org/abstracts/search?q=kriging" title=" kriging"> kriging</a>, <a href="https://publications.waset.org/abstracts/search?q=variogram" title=" variogram"> variogram</a>, <a href="https://publications.waset.org/abstracts/search?q=Octopus%20vulgaris" title=" Octopus vulgaris"> Octopus vulgaris</a> </p> <a href="https://publications.waset.org/abstracts/134284/spatial-patterns-and-temporal-evolution-of-octopus-abundance-in-the-mauritanian-zone" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/134284.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4817</span> Dynamic Background Updating for Lightweight Moving Object Detection </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kelemewerk%20Destalem">Kelemewerk Destalem</a>, <a href="https://publications.waset.org/abstracts/search?q=Joongjae%20Cho"> Joongjae Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Jaeseong%20Lee"> Jaeseong Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Ju%20H.%20Park"> Ju H. Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Joonhyuk%20Yoo"> Joonhyuk Yoo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background subtraction and temporal difference are often used for moving object detection in video. Both approaches are computationally simple and easy to be deployed in real-time image processing. However, while the background subtraction is highly sensitive to dynamic background and illumination changes, the temporal difference approach is poor at extracting relevant pixels of the moving object and at detecting the stopped or slowly moving objects in the scene. In this paper, we propose a moving object detection scheme based on adaptive background subtraction and temporal difference exploiting dynamic background updates. The proposed technique consists of a histogram equalization, a linear combination of background and temporal difference, followed by the novel frame-based and pixel-based background updating techniques. Finally, morphological operations are applied to the output images. Experimental results show that the proposed algorithm can solve the drawbacks of both background subtraction and temporal difference methods and can provide better performance than that of each method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20subtraction" title="background subtraction">background subtraction</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20updating" title=" background updating"> background updating</a>, <a href="https://publications.waset.org/abstracts/search?q=real%20time" title=" real time"> real time</a>, <a href="https://publications.waset.org/abstracts/search?q=light%20weight%20algorithm" title=" light weight algorithm"> light weight algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20difference" title=" temporal difference"> temporal difference</a> </p> <a href="https://publications.waset.org/abstracts/31063/dynamic-background-updating-for-lightweight-moving-object-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31063.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4816</span> High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anthony%20D.%20Rhodes">Anthony D. Rhodes</a>, <a href="https://publications.waset.org/abstracts/search?q=Manan%20Goel"> Manan Goel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We provide a high fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks using a dense convolutional network with context-aware skip connections and compressed, 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistent, high grade fidelity efficiently in our model chiefly through two means: (1) we use a statistically-principled, tensor decomposition procedure to modulate the number of hypercolumn features and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains over 27,046 high resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work presents a improves state of the art segmentation fidelity with high resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20segmentation" title=" object segmentation"> object segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=interactive%20segmentation" title=" interactive segmentation"> interactive segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=model%20compression" title=" model compression"> model compression</a> </p> <a href="https://publications.waset.org/abstracts/122051/high-fidelity-interactive-video-segmentation-using-tensor-decomposition-boundary-loss-convolutional-tessellations-and-context-aware-skip-connections" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/122051.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">120</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4815</span> Spatial Scale of Clustering of Residential Burglary and Its Dependence on Temporal Scale</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20A.%20Alazawi">Mohammed A. Alazawi</a>, <a href="https://publications.waset.org/abstracts/search?q=Shiguo%20Jiang"> Shiguo Jiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Steven%20F.%20Messner"> Steven F. Messner</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Research has long focused on two main spatial aspects of crime: spatial patterns and spatial processes. When analyzing these patterns and processes, a key issue has been to determine the proper spatial scale. In addition, it is important to consider the possibility that these patterns and processes might differ appreciably for different temporal scales and might vary across geographic units of analysis. We examine the spatial-temporal dependence of residential burglary. This dependence is tested at varying geographical scales and temporal aggregations. The analyses are based on recorded incidents of crime in Columbus, Ohio during the 1994-2002 period. We implement point pattern analysis on the crime points using Ripley’s K function. The results indicate that spatial point patterns of residential burglary reveal spatial scales of clustering relatively larger than the average size of census tracts of the study area. Also, spatial scale is independent of temporal scale. The results of our analyses concerning the geographic scale of spatial patterns and processes can inform the development of effective policies for crime control. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=inhomogeneous%20K%20function" title="inhomogeneous K function">inhomogeneous K function</a>, <a href="https://publications.waset.org/abstracts/search?q=residential%20burglary" title=" residential burglary"> residential burglary</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20point%20pattern" title=" spatial point pattern"> spatial point pattern</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20scale" title=" spatial scale"> spatial scale</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20scale" title=" temporal scale"> temporal scale</a> </p> <a href="https://publications.waset.org/abstracts/92371/spatial-scale-of-clustering-of-residential-burglary-and-its-dependence-on-temporal-scale" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92371.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">344</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4814</span> Temporal Axis in Japanese: The Paradox of a Metaphorical Orientation in Time</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tomoko%20Usui">Tomoko Usui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of linguistics, it has been said that concepts associated with space and motion systematically contribute structure to the temporal concept. This is the conceptual metaphor theory. conceptual metaphors typically employ a more abstract concept (time) as their target and a more concrete or physical concept as their source (space). This paper will examine two major temporal conceptual metaphors: Ego-centered Moving Time Metaphor and Time-RP Metaphor. Moving time generally receives a front-back orientation, however, Japanese shows a different orientation given to time. By means of Ego perspective, this paper will illustrate the paradox of a metaphorical orientation in time. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ego-centered%20Moving%20Time%20Metaphor" title="Ego-centered Moving Time Metaphor">Ego-centered Moving Time Metaphor</a>, <a href="https://publications.waset.org/abstracts/search?q=Japanese%20saki" title=" Japanese saki"> Japanese saki</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20metaphors" title=" temporal metaphors"> temporal metaphors</a>, <a href="https://publications.waset.org/abstracts/search?q=Time%20RP%20Metaphor" title=" Time RP Metaphor"> Time RP Metaphor</a> </p> <a href="https://publications.waset.org/abstracts/40111/temporal-axis-in-japanese-the-paradox-of-a-metaphorical-orientation-in-time" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40111.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">496</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4813</span> Spatio-Temporal Analysis and Mapping of Malaria in Thailand</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Krisada%20Lekdee">Krisada Lekdee</a>, <a href="https://publications.waset.org/abstracts/search?q=Sunee%20Sammatat"> Sunee Sammatat</a>, <a href="https://publications.waset.org/abstracts/search?q=Nittaya%20Boonsit"> Nittaya Boonsit</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a GLMM with spatial and temporal effects for malaria data in Thailand. A Bayesian method is used for parameter estimation via Gibbs sampling MCMC. A conditional autoregressive (CAR) model is assumed to present the spatial effects. The temporal correlation is presented through the covariance matrix of the random effects. The malaria quarterly data have been extracted from the Bureau of Epidemiology, Ministry of Public Health of Thailand. The factors considered are rainfall and temperature. The result shows that rainfall and temperature are positively related to the malaria morbidity rate. The posterior means of the estimated morbidity rates are used to construct the malaria maps. The top 5 highest morbidity rates (per 100,000 population) are in Trat (Q3, 111.70), Chiang Mai (Q3, 104.70), Narathiwat (Q4, 97.69), Chiang Mai (Q2, 88.51), and Chanthaburi (Q3, 86.82). According to the DIC criterion, the proposed model has a better performance than the GLMM with spatial effects but without temporal terms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20method" title="Bayesian method">Bayesian method</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20linear%20mixed%20model%20%28GLMM%29" title=" generalized linear mixed model (GLMM)"> generalized linear mixed model (GLMM)</a>, <a href="https://publications.waset.org/abstracts/search?q=malaria" title=" malaria"> malaria</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20effects" title=" spatial effects"> spatial effects</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20correlation" title=" temporal correlation"> temporal correlation</a> </p> <a href="https://publications.waset.org/abstracts/10300/spatio-temporal-analysis-and-mapping-of-malaria-in-thailand" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10300.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">454</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4812</span> A Recognition Method for Spatio-Temporal Background in Korean Historical Novels </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seo-Hee%20Kim">Seo-Hee Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Kee-Won%20Kim"> Kee-Won Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Seung-Hoon%20Kim"> Seung-Hoon Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most important elements of a novel are the characters, events and background. The background represents the time, place and situation that character appears, and conveys event and atmosphere more realistically. If readers have the proper knowledge about background of novels, it may be helpful for understanding the atmosphere of a novel and choosing a novel that readers want to read. In this paper, we are targeting Korean historical novels because spatio-temporal background especially performs an important role in historical novels among the genre of Korean novels. To the best of our knowledge, we could not find previous study that was aimed at Korean novels. In this paper, we build a Korean historical national dictionary. Our dictionary has historical places and temple names of kings over many generations as well as currently existing spatial words or temporal words in Korean history. We also present a method for recognizing spatio-temporal background based on patterns of phrasal words in Korean sentences. Our rules utilize postposition for spatial background recognition and temple names for temporal background recognition. The knowledge of the recognized background can help readers to understand the flow of events and atmosphere, and can use to visualize the elements of novels. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title="data mining">data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=Korean%20historical%20novels" title=" Korean historical novels"> Korean historical novels</a>, <a href="https://publications.waset.org/abstracts/search?q=Korean%20linguistic%20feature" title=" Korean linguistic feature"> Korean linguistic feature</a>, <a href="https://publications.waset.org/abstracts/search?q=spatio-temporal%20background" title=" spatio-temporal background"> spatio-temporal background</a> </p> <a href="https://publications.waset.org/abstracts/47144/a-recognition-method-for-spatio-temporal-background-in-korean-historical-novels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47144.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">277</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4811</span> Variation of Phytoplankton Biomass in the East China Sea Based on MODIS Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yumei%20Wu">Yumei Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoyan%20Dang"> Xiaoyan Dang</a>, <a href="https://publications.waset.org/abstracts/search?q=Shenglong%20Yang"> Shenglong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Shengmao%20Zhang"> Shengmao Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The East China Sea is one of four main seas in China, where there are many fishery resources. Some important fishing grounds, such as Zhousan fishing ground important to society. But the eco-environment is destroyed seriously due to the rapid developing of industry and economy these years. In this paper, about twenty-year satellite data from MODIS and the statistical information of marine environment from the China marine environmental quality bulletin were applied to do the research. The chlorophyll-a concentration data from MODIS were dealt with in the East China Sea and then used to analyze the features and variations of plankton biomass in recent years. The statistics method was used to obtain their spatial and temporal features. The plankton biomass in the Yangtze River estuary and the Taizhou region were highest. The high phytoplankton biomass usually appeared between the 88th day to the 240th day (end-March - August). In the peak time of phytoplankton blooms, the Taizhou islands was the earliest, and the South China Sea was the latest. The intensity and period of phytoplankton blooms were connected with the global climate change. This work give us confidence to use satellite data to do more researches about the China Sea, and it also provides some help for us to know about the eco-environmental variation of the East China Sea and regional effect from global climate change. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=the%20East%20China%20Sea" title="the East China Sea">the East China Sea</a>, <a href="https://publications.waset.org/abstracts/search?q=phytoplankton%20biomass" title=" phytoplankton biomass"> phytoplankton biomass</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20and%20spatial%20variation" title=" temporal and spatial variation"> temporal and spatial variation</a>, <a href="https://publications.waset.org/abstracts/search?q=phytoplankton%20bloom" title=" phytoplankton bloom"> phytoplankton bloom</a> </p> <a href="https://publications.waset.org/abstracts/63969/variation-of-phytoplankton-biomass-in-the-east-china-sea-based-on-modis-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/63969.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">329</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4810</span> Language Processing of Seniors with Alzheimer’s Disease: From the Perspective of Temporal Parameters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lai%20Yi-Hsiu">Lai Yi-Hsiu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present paper aims to examine the language processing of Chinese-speaking seniors with Alzheimer’s disease (AD) from the perspective of temporal cues. Twenty healthy adults, 17 healthy seniors, and 13 seniors with AD in Taiwan participated in this study to tell stories based on two sets of pictures. Nine temporal cues were fetched and analyzed. Oral productions in Mandarin Chinese were compared and discussed to examine to what extent and in what way these three groups of participants performed with significant differences. Results indicated that the age effects were significant in filled pauses. The dementia effects were significant in mean duration of pauses, empty pauses, filled pauses, lexical pauses, normalized mean duration of filled pauses and lexical pauses. The findings reported in the current paper help characterize the nature of language processing in seniors with or without AD, and contribute to the interactions between the AD neural mechanism and their temporal parameters. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=language%20processing" title="language processing">language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=Alzheimer%E2%80%99s%20disease" title=" Alzheimer’s disease"> Alzheimer’s disease</a>, <a href="https://publications.waset.org/abstracts/search?q=Mandarin%20Chinese" title=" Mandarin Chinese"> Mandarin Chinese</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20cues" title=" temporal cues"> temporal cues</a> </p> <a href="https://publications.waset.org/abstracts/62548/language-processing-of-seniors-with-alzheimers-disease-from-the-perspective-of-temporal-parameters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62548.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4809</span> Dual-Network Memory Model for Temporal Sequences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Motonobu%20Hattori">Motonobu Hattori</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In neural networks, when new patters are learned by a network, they radically interfere with previously stored patterns. This drawback is called catastrophic forgetting. We have already proposed a biologically inspired dual-network memory model which can much reduce this forgetting for static patterns. In this model, information is first stored in the hippocampal network, and thereafter, it is transferred to the neocortical network using pseudo patterns. Because, temporal sequence learning is more important than static pattern learning in the real world, in this study, we improve our conventional dual-network memory model so that it can deal with temporal sequences without catastrophic forgetting. The computer simulation results show the effectiveness of the proposed dual-network memory model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=catastrophic%20forgetting" title="catastrophic forgetting">catastrophic forgetting</a>, <a href="https://publications.waset.org/abstracts/search?q=dual-network" title=" dual-network"> dual-network</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20sequences" title=" temporal sequences"> temporal sequences</a>, <a href="https://publications.waset.org/abstracts/search?q=hippocampal" title=" hippocampal "> hippocampal </a> </p> <a href="https://publications.waset.org/abstracts/2908/dual-network-memory-model-for-temporal-sequences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">269</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4808</span> Ontology-Based Approach for Temporal Semantic Modeling of Social Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sou%C3%A2ad%20Boudebza">Souâad Boudebza</a>, <a href="https://publications.waset.org/abstracts/search?q=Omar%20Nouali"> Omar Nouali</a>, <a href="https://publications.waset.org/abstracts/search?q=Fai%C3%A7al%20Azouaou"> Faiçal Azouaou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Social networks have recently gained a growing interest on the web. Traditional formalisms for representing social networks are static and suffer from the lack of semantics. In this paper, we will show how semantic web technologies can be used to model social data. The SemTemp ontology aligns and extends existing ontologies such as FOAF, SIOC, SKOS and OWL-Time to provide a temporal and semantically rich description of social data. We also present a modeling scenario to illustrate how our ontology can be used to model social networks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ontology" title="ontology">ontology</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20web" title=" semantic web"> semantic web</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20network" title=" social network"> social network</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20modeling" title=" temporal modeling"> temporal modeling</a> </p> <a href="https://publications.waset.org/abstracts/42125/ontology-based-approach-for-temporal-semantic-modeling-of-social-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42125.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">386</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4807</span> Assessing Functional Structure in European Marine Ecosystems Using a Vector-Autoregressive Spatio-Temporal Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katyana%20A.%20Vert-Pre">Katyana A. 
Vert-Pre</a>, <a href="https://publications.waset.org/abstracts/search?q=James%20T.%20Thorson"> James T. Thorson</a>, <a href="https://publications.waset.org/abstracts/search?q=Thomas%20Trancart"> Thomas Trancart</a>, <a href="https://publications.waset.org/abstracts/search?q=Eric%20Feunteun"> Eric Feunteun</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In marine ecosystems, spatial and temporal species structure is an important component of ecosystems’ response to anthropogenic and environmental factors. Although spatial distribution patterns and temporal series of fish abundance have been studied in the past, little research has been devoted to the joint dynamic spatio-temporal functional patterns in marine ecosystems and their use in multispecies management and conservation. Each species represents a function in the ecosystem, and the distribution of these species might not be random. A heterogeneous functional distribution will lead to an ecosystem that is more resilient to external factors. Applying a Vector-Autoregressive Spatio-Temporal (VAST) model for count data, we estimate the spatio-temporal distribution, shift in time, and abundance of 140 species of the Eastern English Channel, Bay of Biscay, and Mediterranean Sea. From the model outputs, we determined spatio-temporal clusters, calculating p-values for hierarchical clustering via multiscale bootstrap resampling. We then designed a functional map given the defined clusters. We found that the species distribution within the ecosystem was not random. Indeed, species evolved in space and time in clusters. Moreover, these clusters remained similar over time, which derives from the fact that species of the same cluster often shifted in sync, keeping the overall structure of the ecosystem similar over time. Knowing the co-existing species within these clusters could help predict the distribution and abundance of data-poor species. Further analysis is being performed to assess the ecological functions represented in each cluster. 
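<p class="card-text">A small sketch of the clustering step on top of model-estimated density fields: each species is summarised by its flattened spatio-temporal density array and species are grouped by hierarchical clustering on correlation distance; the density array is hypothetical, and the multiscale bootstrap p-values (pvclust in R) used in the paper are not reproduced here.</p>
<pre><code class="language-python">
# Sketch of clustering species by their estimated spatio-temporal density fields.
# dens is a hypothetical array of shape (n_species, n_years, n_grid_cells), e.g.
# posterior mean densities from a VAST-style model.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def species_clusters(dens, n_clusters=6):
    X = dens.reshape(dens.shape[0], -1)                    # one row per species
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-12)
    Z = linkage(pdist(X, metric="correlation"), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust") # cluster label per species
</code></pre>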
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cluster%20distribution%20shift" title="cluster distribution shift">cluster distribution shift</a>, <a href="https://publications.waset.org/abstracts/search?q=European%20marine%20ecosystems" title=" European marine ecosystems"> European marine ecosystems</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20distribution" title=" functional distribution"> functional distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=spatio-temporal%20model" title=" spatio-temporal model"> spatio-temporal model</a> </p> <a href="https://publications.waset.org/abstracts/87029/assessing-functional-structure-in-european-marine-ecosystems-using-a-vector-autoregressive-spatio-temporal-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87029.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">194</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=161">161</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=162">162</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20features&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th 
foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>