
Search results for: spectrogram

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="spectrogram"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 25</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: spectrogram</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> 1D Convolutional Networks to Compute Mel-Spectrogram, Chromagram, and Cochleogram for Audio Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elias%20Nemer">Elias Nemer</a>, <a href="https://publications.waset.org/abstracts/search?q=Greg%20Vines"> Greg Vines</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Time-frequency transformation and spectral representations of audio signals are commonly used in various machine learning applications. Training networks on frequency features such as the Mel-Spectrogram or Cochleogram have been proven more effective and convenient than training on-time samples. In practical realizations, these features are created on a different processor and/or pre-computed and stored on disk, requiring additional efforts and making it difficult to experiment with different features. In this paper, we provide a PyTorch framework for creating various spectral features as well as time-frequency transformation and time-domain filter-banks using the built-in trainable conv1d() layer. This allows computing these features on the fly as part of a larger network and enabling easier experimentation with various combinations and parameters. Our work extends the work in the literature developed for that end: First, by adding more of these features and also by allowing the possibility of either starting from initialized kernels or training them from random values. The code is written as a template of classes and scripts that users may integrate into their own PyTorch classes or simply use as is and add more layers for various applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20networks%20Mel-Spectrogram" title="neural networks Mel-Spectrogram">neural networks Mel-Spectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=chromagram" title=" chromagram"> chromagram</a>, <a href="https://publications.waset.org/abstracts/search?q=cochleogram" title=" cochleogram"> cochleogram</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20Fourrier%20transform" title=" discrete Fourrier transform"> discrete Fourrier transform</a>, <a href="https://publications.waset.org/abstracts/search?q=PyTorch%20conv1d%28%29" title=" PyTorch conv1d()"> PyTorch conv1d()</a> </p> <a href="https://publications.waset.org/abstracts/133529/1d-convolutional-networks-to-compute-mel-spectrogram-chromagram-and-cochleogram-for-audio-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133529.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">233</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">24</span> Slice Bispectrogram Analysis-Based Classification of Environmental Sounds Using Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsumi%20Hirata">Katsumi Hirata</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Certain systems can function well only if they recognize the sound environment as humans do. In this research, we focus on sound classification by adopting a convolutional neural network and aim to develop a method that automatically classifies various environmental sounds. Although the neural network is a powerful technique, the performance depends on the type of input data. Therefore, we propose an approach via a slice bispectrogram, which is a third-order spectrogram and is a slice version of the amplitude for the short-time bispectrum. This paper explains the slice bispectrogram and discusses the effectiveness of the derived method by evaluating the experimental results using the ESC‑50 sound dataset. As a result, the proposed scheme gives high accuracy and stability. Furthermore, some relationship between the accuracy and non-Gaussianity of sound signals was confirmed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=environmental%20sound" title="environmental sound">environmental sound</a>, <a href="https://publications.waset.org/abstracts/search?q=bispectrum" title=" bispectrum"> bispectrum</a>, <a href="https://publications.waset.org/abstracts/search?q=spectrogram" title=" spectrogram"> spectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=slice%20bispectrogram" title=" slice bispectrogram"> slice bispectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a> </p> <a href="https://publications.waset.org/abstracts/114107/slice-bispectrogram-analysis-based-classification-of-environmental-sounds-using-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">126</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> Musical Instrument Recognition in Polyphonic Audio Through Convolutional Neural Networks and Spectrograms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rujia%20Chen">Rujia Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Akbar%20Ghobakhlou"> Akbar Ghobakhlou</a>, <a href="https://publications.waset.org/abstracts/search?q=Ajit%20Narayanan"> Ajit Narayanan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates the task of identifying musical instruments in polyphonic compositions using Convolutional Neural Networks (CNNs) from spectrogram inputs, focusing on binary classification. The model showed promising results, with an accuracy of 97% on solo instrument recognition. When applied to polyphonic combinations of 1 to 10 instruments, the overall accuracy was 64%, reflecting the increasing challenge with larger ensembles. These findings contribute to the field of Music Information Retrieval (MIR) by highlighting the potential and limitations of current approaches in handling complex musical arrangements. Future work aims to include a broader range of musical sounds, including electronic and synthetic sounds, to improve the model's robustness and applicability in real-time MIR systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=binary%20classifier" title="binary classifier">binary classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=spectrogram" title=" spectrogram"> spectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=instrument" title=" instrument"> instrument</a> </p> <a href="https://publications.waset.org/abstracts/185822/musical-instrument-recognition-in-polyphonic-audio-through-convolutional-neural-networks-and-spectrograms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185822.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">77</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> Classification of Coughing and Breathing Activities Using Wearable and a Light-Weight DL Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subham%20Ghosh">Subham Ghosh</a>, <a href="https://publications.waset.org/abstracts/search?q=Arnab%20Nandi"> Arnab Nandi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: The proliferation of Wireless Body Area Networks (WBAN) and Internet of Things (IoT) applications demonstrates the potential for continuous monitoring of physical changes in the body. These technologies are vital for health monitoring tasks, such as identifying coughing and breathing activities, which are necessary for disease diagnosis and management. Monitoring activities such as coughing and deep breathing can provide valuable insights into a variety of medical issues. Wearable radio-based antenna sensors, which are lightweight and easy to incorporate into clothing or portable goods, provide continuous monitoring. This mobility gives it a substantial advantage over stationary environmental sensors like as cameras and radar, which are constrained to certain places. Furthermore, using compressive techniques provides benefits such as reduced data transmission speeds and memory needs. These wearable sensors offer more advanced and diverse health monitoring capabilities. Methodology: This study analyzes the feasibility of using a semi-flexible antenna operating at 2.4 GHz (ISM band) and positioned around the neck and near the mouth to identify three activities: coughing, deep breathing, and idleness. Vector network analyzer (VNA) is used to collect time-varying complex reflection coefficient data from perturbed antenna nearfield. The reflection coefficient (S11) conveys nuanced information caused by simultaneous variations in the nearfield radiation of three activities across time. The signatures are sparsely represented with gaussian windowed Gabor spectrograms. The Gabor spectrogram is used as a sparse representation approach, which reassigns the ridges of the spectrogram images to improve their resolution and focus on essential components. The antenna is biocompatible in terms of specific absorption rate (SAR). The sparsely represented Gabor spectrogram pictures are fed into a lightweight deep learning (DL) model for feature extraction and classification. 
Two antenna locations are investigated in order to determine the most effective placement for the three activities. Findings: Cross-validation techniques were used on data from both locations. Because the recorded S11 is complex-valued, separate analyses and assessments were performed on the magnitude, the phase, and their combination; the combination of magnitude and phase fared better than the separate analyses. Various sliding window sizes, ranging from 1 to 5 seconds, were tested to find the best window for activity classification. It was found that a neck-mounted design was effective at detecting the three distinct behaviors.
Keywords: activity recognition, antenna, deep-learning, time-frequency
PDF: https://publications.waset.org/abstracts/194633.pdf (Downloads: 9)
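A minimal sketch of one step in the methodology described in the entry above: a Gaussian-windowed (Gabor) spectrogram of the complex, time-varying reflection coefficient S11, computed with SciPy. The sweep rate, window length, and synthetic S11 trace are assumptions for illustration only.

```python
import numpy as np
from scipy import signal

fs = 100.0                                        # assumed S11 sweep rate in Hz
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
s11 = 0.1 * np.exp(1j * 2 * np.pi * 1.5 * t) \
      + 0.01 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

win = signal.windows.gaussian(64, std=8)          # Gaussian window gives a Gabor spectrogram
f, frames, Sxx = signal.spectrogram(s11, fs=fs, window=win, noverlap=48,
                                    return_onesided=False, mode="magnitude")
gabor_image = np.fft.fftshift(Sxx, axes=0)        # centre zero frequency for the image
print(gabor_image.shape)                          # (freq bins, time frames) image for the DL model
```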
21. Experimental Research and Analyses of Yoruba Native Speakers' Chinese Phonetic Errors
Authors: Obasa Joshua Ifeoluwa
Abstract: Phonetics is the foundation and most important part of language learning. This article, through an acoustic experiment using Praat software, carries out a visual comparison of Yoruba students' pronunciation of Chinese consonants, vowels, and tones with that of native Chinese speakers. The article is aimed at Yoruba native speakers learning Chinese phonetics; therefore, Yoruba students were selected. The students surveyed were required to be at an elementary level and to have studied Chinese for less than six months. All are undergraduates majoring in Chinese Studies at the University of Lagos, have already learned Chinese Pinyin, and are familiar with the pinyin used in the provided questionnaire. The Chinese speakers selected have passed the level two Mandarin proficiency examination, which serves as an assurance that their pronunciation is standard. This work finds that, in terms of Mandarin consonant pronunciation, Yoruba students cannot distinguish the voiced/voiceless or the aspirated/unaspirated phonetic features. For instance, when pronouncing [pʰ], the spectrogram clearly shows that the Voice Onset Time (VOT) of a Chinese speaker is longer than that of a Yoruba native speaker, which means that the Yoruba speaker is producing the unaspirated counterpart [p]. Another difficulty is pronouncing sounds such as [tʂ], [tʂʰ], [ʂ], [ʐ], [tɕ], [tɕʰ], and [ɕ], because these sounds are not in the phonetic system of the Yoruba language. In terms of vowels, some students find it difficult to pronounce the allophonic high vowels [ɿ] and [ʅ], producing them instead as the phoneme [i]; another pronunciation error is producing [y] as [u], and, as shown in the spectrogram, one student pronounced [y] as [iu]. In terms of tone, it is most difficult for students to differentiate between the second (rising) and third (falling-rising) tones because both emphasize a rising pitch. This work concludes that the major errors made by Yoruba students while pronouncing Chinese sounds are caused by interference from their first language (L1) and sometimes from their lingua franca.
Keywords: Chinese, Yoruba, error analysis, experimental phonetics, consonant, vowel, tone
PDF: https://publications.waset.org/abstracts/148984.pdf (Downloads: 111)
20. Brainwave Classification for Brain Balancing Index (BBI) via 3D EEG Model Using k-NN Technique
Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan
Abstract: In this paper, a comparison of k-Nearest Neighbor (k-NN) algorithms for classifying the 3D EEG model in brain balancing is presented. The EEG signal recording was conducted on 51 healthy subjects. Development of the 3D EEG models involves pre-processing of the raw EEG signals and construction of spectrogram images, from which the maximum PSD values were extracted as features. There are three indexes for the balanced brain: index 3, index 4, and index 5, and the EEG signals differ significantly across brain balancing indexes (BBI). The alpha (8-13 Hz) and beta (13-30 Hz) bands were used as input signals for the classification model. The k-NN classifier achieved 88.46% accuracy. These results show that k-NN can be used in the brain balancing application.
Keywords: power spectral density, 3D EEG model, brain balancing, kNN
PDF: https://publications.waset.org/abstracts/11285.pdf (Downloads: 486)
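The feature/classifier stage described in the entry above can be sketched as follows: maximum PSD values in the alpha and beta bands are taken from an EEG spectrogram and classified with k-NN. The sampling rate, segment length, and label encoding are assumptions, and the data here are synthetic.

```python
import numpy as np
from scipy import signal
from sklearn.neighbors import KNeighborsClassifier


def max_band_psd(eeg, fs=256):
    """Maximum PSD in the alpha (8-13 Hz) and beta (13-30 Hz) bands of a spectrogram."""
    f, _, Sxx = signal.spectrogram(eeg, fs=fs, nperseg=fs)
    return [Sxx[(f >= lo) & (f < hi)].max() for lo, hi in [(8, 13), (13, 30)]]


rng = np.random.default_rng(1)
X = np.array([max_band_psd(rng.standard_normal(2560)) for _ in range(60)])
y = rng.integers(3, 6, size=60)        # brain balancing index labels 3, 4, 5

knn = KNeighborsClassifier(n_neighbors=5).fit(X[:40], y[:40])
print("held-out accuracy:", knn.score(X[40:], y[40:]))
```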
19. Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs
Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu
Abstract: This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive feature-based speech recognition domain. Leveraging the legacy tool 'xkl' and integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement to the 'xkl' legacy software. The integration incorporates re-assigned spectrogram methodologies, enabling detailed acoustic analysis, while the combined CNN-RNN model provides precise and robust landmark detection. The addition of re-assigned spectrogram fusion within 'xkl' particularly improves the precision of vowel formant estimation and yields a substantial performance gain in landmark detection compared to conventional methods. In the deep learning component, the combined CNNs and RNNs are equipped with specialized temporal embeddings, self-attention mechanisms, and positional embeddings, allowing the model to capture intricate dependencies within Italian speech vowels and making it highly adaptable in the distinctive feature domain. The temporal modeling approach further employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models shows a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. In testing on the LaMIT database, speech recorded in a silent room by four Italian native speakers, the landmark detector achieves a 95% true detection rate and a 10% false detection rate, with most missed landmarks observed in proximity to reduced vowels. These results underscore the identifiability of landmarks within the speech waveform and establish the feasibility of employing a landmark detector as the front end of a speech recognition system. The combination of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding thus provides a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels and a foundation for future advances in speech signal processing and robust speech recognition systems.
Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network
PDF: https://publications.waset.org/abstracts/184529.pdf (Downloads: 63)
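For readers unfamiliar with re-assigned spectrograms, the sketch below shows the idea using librosa's implementation; the example audio, FFT size, and hop length are placeholders, and this is not the 'xkl' integration itself.

```python
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"), sr=None)   # example clip bundled with librosa
freqs, times, mags = librosa.reassigned_spectrogram(y=y, sr=sr, n_fft=1024, hop_length=256)
# freqs/times hold the re-assigned coordinates of each STFT bin; mags holds the magnitudes.
db = librosa.amplitude_to_db(mags, ref=np.max)
print(freqs.shape, times.shape, db.shape)               # each (1 + n_fft // 2, n_frames)
```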
18. Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract: An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recursive neural networks has shown significant improvements in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles capture patterns and relationships that represent the actual spectrum energy in a low-dimensional space, and increasing the level of separation between classes in feature space improves the achievable classification accuracy. Neural networks extract features through a variety of nonlinear transformations and mathematical optimizations, whereas principal component analysis depends on linear transformations. In this paper, the isotope spectrum information has been preprocessed by computing its frequency components over time and using them as the training dataset; the Fourier transform used to extract the frequency components has been optimized with a suitable window function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4, and the readout electronic noise has been simulated by tuning the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, improved the classification accuracy of the neural networks, and a single deep learning prediction approach to discriminating gamma and neutron events achieved high accuracy. The findings show that classification accuracy improves when the spectrogram preprocessing stage is applied to the gamma and neutron spectra of different isotopes. Hyperparameter optimization of the neural network models enhanced the separation in the latent space and made it possible to extend the number of detected isotopes in the training database, and ensemble learning contributed significantly to the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
PDF: https://publications.waset.org/abstracts/140878.pdf (Downloads: 150)
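A simple sketch of the preprocessing stage described in the entry above: a windowed short-time Fourier transform turns a detector waveform into a spectrogram-like image of frequency content over time before it is passed to a classifier. The sampling rate, pulse shape, and noise level are assumptions; the study's actual waveforms come from Geant4 simulations.

```python
import numpy as np
from scipy import signal

fs = 1_000_000                          # assumed detector sampling rate in Hz
t = np.arange(0, 0.01, 1 / fs)
pulse = np.exp(-2000 * t) * np.sin(2 * np.pi * 50_000 * t)   # toy detector pulse
rng = np.random.default_rng(2)
waveform = pulse + 0.05 * rng.standard_normal(t.size)        # plus simulated readout noise

# Windowed STFT: the window choice controls the time/frequency trade-off mentioned above.
f, frames, Z = signal.stft(waveform, fs=fs, window="hann", nperseg=256, noverlap=192)
spectrogram_image = np.abs(Z)           # (freq bins, time frames) input for the classifier
print(spectrogram_image.shape)
```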
17. Characterization of 3D-MRP for Analyzing of Brain Balancing Index (BBI) Pattern
Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan
Abstract: This paper discusses power spectral density (PSD) characteristics extracted from three-dimensional (3D) electroencephalogram (EEG) models. The EEG signal recording was conducted on 150 healthy subjects. Development of the 3D EEG models involves pre-processing of the raw EEG signals and construction of spectrogram images, from which the maximum PSD values were extracted as features. These features are analysed using the mean relative power (MRP) and different mean relative power (DMRP) techniques to observe the pattern among different brain balancing indexes. The results showed that, by implementing these techniques, the pattern of brain balancing indexes can be clearly observed, with some patterns indicated between index 1 and index 5 for the left frontal (LF) and right frontal (RF) regions.
Keywords: power spectral density, 3D EEG model, brain balancing, mean relative power, different mean relative power
PDF: https://publications.waset.org/abstracts/6107.pdf (Downloads: 474)

16. Times2D: A Time-Frequency Method for Time Series Forecasting
Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan
Abstract: Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Because of this critical relevance and wide applicability, there has been considerable interest in time series forecasting in recent years.
However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as the autoregressive integrated moving average (ARIMA) and exponential smoothing aim to extract pre-defined temporal variations, such as trend and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) and gated recurrent units (GRUs), have been widely adopted for modeling sequential data, but they often have difficulty capturing local trends and rapid fluctuations. Convolutional neural networks (CNNs), particularly temporal convolutional networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension; despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points, yet the attention mechanism may struggle to discern dependencies directly from scattered time points with intricate temporal patterns. Lastly, multi-layer perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success; despite this, MLPs often face high volatility and computational complexity in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation makes it possible to apply powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. Initial results show that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. Furthermore, the generality of the Times2D framework allows it to be applied to tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
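A compact sketch of the two 2D views named in the abstract above, built with standard tools: an STFT spectrogram of the series (frequency view) and a heatmap of framed first and second differences (time view). The window sizes and the toy series are arbitrary choices, not the paper's settings.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 40 * np.pi, 2048)) + 0.3 * rng.standard_normal(2048)  # toy series

# Frequency view: STFT magnitude captures periodicity.
_, _, Z = signal.stft(x, nperseg=128, noverlap=96)
spec_img = np.abs(Z)

# Time view: frame-wise first and second differences highlight sharp changes and turning points.
d1, d2 = np.diff(x, n=1), np.diff(x, n=2)
frame = 64
n = min(len(d1), len(d2)) // frame * frame
deriv_img = np.vstack([d1[:n].reshape(-1, frame).T, d2[:n].reshape(-1, frame).T])

print(spec_img.shape, deriv_img.shape)   # two 2D inputs for a computer-vision backbone
```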
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=derivative%20patterns" title="derivative patterns">derivative patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=spectrogram" title=" spectrogram"> spectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20series%20forecasting" title=" time series forecasting"> time series forecasting</a>, <a href="https://publications.waset.org/abstracts/search?q=times2D" title=" times2D"> times2D</a>, <a href="https://publications.waset.org/abstracts/search?q=2D%20representation" title=" 2D representation"> 2D representation</a> </p> <a href="https://publications.waset.org/abstracts/186575/times2d-a-time-frequency-method-for-time-series-forecasting" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186575.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">42</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15</span> Adsorption of Methylene Blue by Pectin from Durian (Durio zibethinus) Seeds</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siti%20Nurkhalimah">Siti Nurkhalimah</a>, <a href="https://publications.waset.org/abstracts/search?q=Devita%20Wijiyanti"> Devita Wijiyanti</a>, <a href="https://publications.waset.org/abstracts/search?q=Kuntari"> Kuntari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Methylene blue is a popular water-soluble dye that is used for dyeing a variety of substrates such as bacteria, wool, and silk. Methylene blue discharged into the aquatic environment will cause health problems for living things. Treatment method for industrial wastewater may be divided into three main categories: physical, chemical, and biological. Among them, adsorption technology is generally considered to be an effective method for quickly lowering the concentration of dissolved dyes in a wastewater. This has attracted considerable research into low-cost alternative adsorbents for adsorbing or removing coloring matter. In this research, pectin from durian seeds was utilized here to assess their ability for the removal of methylene blue. Adsorption parameters are contact time and dye concentration were examined in the batch adsorption processes. Pectin characterization was performed by FTIR spectrometry. Methylene blue concentration was determined by using UV-Vis spectrophotometer. FTIR results show that the samples showed the typical fingerprint in IR spectrogram. The adsorption result on 10 mL of 5 mg/L methylene blue solution achieved 95.12% when contact time 10 minutes and pectin 0.2 g. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pectin" title="pectin">pectin</a>, <a href="https://publications.waset.org/abstracts/search?q=methylene%20blue" title=" methylene blue"> methylene blue</a>, <a href="https://publications.waset.org/abstracts/search?q=adsorption" title=" adsorption"> adsorption</a>, <a href="https://publications.waset.org/abstracts/search?q=durian%20seed" title=" durian seed"> durian seed</a> </p> <a href="https://publications.waset.org/abstracts/83104/adsorption-of-methylene-blue-by-pectin-from-durian-durio-zibethinus-seeds" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/83104.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">185</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14</span> Deep Learning Approaches for Accurate Detection of Epileptic Seizures from Electroencephalogram Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ramzi%20Rihane">Ramzi Rihane</a>, <a href="https://publications.waset.org/abstracts/search?q=Yassine%20Benayed"> Yassine Benayed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Epilepsy is a chronic neurological disorder characterized by recurrent, unprovoked seizures resulting from abnormal electrical activity in the brain. Timely and accurate detection of these seizures is essential for improving patient care. In this study, we leverage the UK Bonn University open-source EEG dataset and employ advanced deep-learning techniques to automate the detection of epileptic seizures. By extracting key features from both time and frequency domains, as well as Spectrogram features, we enhance the performance of various deep learning models. Our investigation includes architectures such as Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), 1D Convolutional Neural Networks (1D-CNN), and hybrid CNN-LSTM and CNN-BiLSTM models. The models achieved impressive accuracies: LSTM (98.52%), Bi-LSTM (98.61%), CNN-LSTM (98.91%), CNN-BiLSTM (98.83%), and CNN (98.73%). Additionally, we utilized a data augmentation technique called SMOTE, which yielded the following results: CNN (97.36%), LSTM (97.01%), Bi-LSTM (97.23%), CNN-LSTM (97.45%), and CNN-BiLSTM (97.34%). These findings demonstrate the effectiveness of deep learning in capturing complex patterns in EEG signals, providing a reliable and scalable solution for real-time seizure detection in clinical environments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electroencephalogram" title="electroencephalogram">electroencephalogram</a>, <a href="https://publications.waset.org/abstracts/search?q=epileptic%20seizure" title=" epileptic seizure"> epileptic seizure</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=BI-LSTM" title=" BI-LSTM"> BI-LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=seizure%20detection" title=" seizure detection"> seizure detection</a> </p> <a href="https://publications.waset.org/abstracts/193110/deep-learning-approaches-for-accurate-detection-of-epileptic-seizures-from-electroencephalogram-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193110.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">12</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13</span> Detection of Atrial Fibrillation Using Wearables via Attentional Two-Stream Heterogeneous Networks </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huawei%20Bai">Huawei Bai</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianguo%20Yao"> Jianguo Yao</a>, <a href="https://publications.waset.org/abstracts/search?q=Fellow"> Fellow</a>, <a href="https://publications.waset.org/abstracts/search?q=IEEE"> IEEE</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Atrial fibrillation (AF) is the most common form of heart arrhythmia and is closely associated with mortality and morbidity in heart failure, stroke, and coronary artery disease. The development of single spot optical sensors enables widespread photoplethysmography (PPG) screening, especially for AF, since it represents a more convenient and noninvasive approach. To our knowledge, most existing studies based on public and unbalanced datasets can barely handle the multiple noises sources in the real world and, also, lack interpretability. In this paper, we construct a large- scale PPG dataset using measurements collected from PPG wrist- watch devices worn by volunteers and propose an attention-based two-stream heterogeneous neural network (TSHNN). The first stream is a hybrid neural network consisting of a three-layer one-dimensional convolutional neural network (1D-CNN) and two-layer attention- based bidirectional long short-term memory (Bi-LSTM) network to learn representations from temporally sampled signals. The second stream extracts latent representations from the PPG time-frequency spectrogram using a five-layer CNN. The outputs from both streams are fed into a fusion layer for the outcome. Visualization of the attention weights learned demonstrates the effectiveness of the attention mechanism against noise. The experimental results show that the TSHNN outperforms all the competitive baseline approaches and with 98.09% accuracy, achieves state-of-the-art performance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=PPG%20wearables" title="PPG wearables">PPG wearables</a>, <a href="https://publications.waset.org/abstracts/search?q=atrial%20fibrillation" title=" atrial fibrillation"> atrial fibrillation</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title=" feature fusion"> feature fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title=" attention mechanism"> attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=hyber%20network" title=" hyber network"> hyber network</a> </p> <a href="https://publications.waset.org/abstracts/113139/detection-of-atrial-fibrillation-using-wearables-via-attentional-two-stream-heterogeneous-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/113139.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12</span> Accentuation Moods of Blaming Utterances in Egyptian Arabic: A Pragmatic Study of Prosodic Focus</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Reda%20A.%20H.%20Mahmoud">Reda A. H. Mahmoud</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates the pragmatic meaning of prosodic focus through four accentuation moods of blaming utterances in Egyptian Arabic. Prosodic focus results in various pragmatic meanings when the speaker utters the same blaming expression in different emotional moods: the angry, the mocking, the frustrated, and the informative moods. The main objective of this study is to interpret the meanings of these four accentuation moods in relation to their illocutionary forces and pre-locutionary effects, the integrated features of prosodic focus (e.g., tone movement distributions, pitch accents, lengthening of vowels, deaccentuation of certain syllables/words, and tempo), and the consonance between the former prosodic features and certain lexico-grammatical components to communicate the intentions of the speaker. The data on blaming utterances has been collected via elicitation and pre-recorded material, and the selection of blaming utterances is based on the criteria of lexical and prosodic regularity to be processed and verified by three computer programs, Praat, Speech Analyzer, and Spectrogram Freeware. A dual pragmatic approach is established to interpret expressive blaming utterance and their lexico-grammatical distributions into intonational focus structure units. The pragmatic component of this approach explains the variable psychological attitudes through the expressions of blaming and their effects whereas the analysis of prosodic focus structure is used to describe the intonational contours of blaming utterances and other prosodic features. The study concludes that every accentuation mood has its different prosodic configuration which influences the listener’s interpretation of the pragmatic meanings of blaming utterances. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pragmatics" title="pragmatics">pragmatics</a>, <a href="https://publications.waset.org/abstracts/search?q=pragmatic%20interpretation" title=" pragmatic interpretation"> pragmatic interpretation</a>, <a href="https://publications.waset.org/abstracts/search?q=prosody" title=" prosody"> prosody</a>, <a href="https://publications.waset.org/abstracts/search?q=prosodic%20focus" title=" prosodic focus"> prosodic focus</a> </p> <a href="https://publications.waset.org/abstracts/87935/accentuation-moods-of-blaming-utterances-in-egyptian-arabic-a-pragmatic-study-of-prosodic-focus" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87935.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">152</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11</span> UEMG-FHR Coupling Analysis in Pregnancies Complicated by Pre-Eclampsia and Small for Gestational Age</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kun%20Chen">Kun Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20Wang"> Yan Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yangyu%20Zhao"> Yangyu Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Shufang%20Li"> Shufang Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Lian%20Chen"> Lian Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoyue%20Guo"> Xiaoyue Guo</a>, <a href="https://publications.waset.org/abstracts/search?q=Jue%20Zhang"> Jue Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jing%20Fang"> Jing Fang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The coupling strength between uterine electromyography (UEMG) and Fetal heart rate (FHR) signals during peripartum reflects the fetal biophysical activities. Therefore, UEMG-FHR coupling characterization is instructive in assessing placenta function. This study introduced a physiological marker named elevated frequency of UEMG-FHR coupling (E-UFC) and explored its predictive value for pregnancies complicated by pre-eclampsia and small for gestational age (SGA). Placental insufficiency patients (n=12) and healthy volunteers (n=24) were recruited and participated. UEMG and FHR were recorded non-invasively by a trans-abdominal device in women at term with singleton pregnancy (32-37 weeks) from 10:00 pm to 8:00 am. The product of the wavelet coherence and the wavelet cross-spectral power between UEMG and FHR was used to weight these two effects in order to quantify the degree of the UEMG-FHR coupling. E-UFC was exacted from the resultant spectrogram by calculating the mean value of the high-coherence (r > 0.5) frequency band. Results showed the high-coherence between UEMG and FHR was observed in the frequency band (1/512-1/16Hz). In addition, E-UFC in placental insufficiency patients was weaker compared to healthy controls (p < 0.001) at group level. 
These findings suggest the proposed approach can be used to quantitatively characterize fetal biophysical activity, which is beneficial for early detection of placental insufficiency and for reducing the occurrence of adverse pregnancy outcomes.
Keywords: uterine electromyography, fetal heart rate, coupling analysis, wavelet analysis
PDF: https://publications.waset.org/abstracts/95342.pdf (Downloads: 202)

10. Adhesive Based upon Polyvinyl Alcohol And Chemical Modified Oca (Oxalis tuberosa) Starch
Authors: Samantha Borja, Vladimir Valle, Pamela Molina
Abstract: The development of adhesives from renewable raw materials attracts the attention of the scientific community because it promises to reduce dependence on materials derived from oil. This work proposes the use of modified oca (Oxalis tuberosa) starch and polyvinyl alcohol (PVA) in the preparation of adhesives for lignocellulosic substrates. The investigation focused on the formulation of adhesives with three different PVA:starch ratios (1.0:0.33, 1.0:1.0, and 1.0:1.67), using both modified and native starch. The first step was the chemical modification of the starch through acid hydrolysis and a subsequent urea treatment to obtain starch carbamate. The adhesives obtained were then characterized in terms of instantaneous viscosity, Fourier-transform infrared spectroscopy (FTIR), and shear strength. The results showed that the viscosity and mechanical tests exhibit the same tendency with respect to native and modified starch concentration: the values decrease up to a certain concentration, after which they begin to grow. On the other hand, two relevant bands were found in the FTIR spectrogram. The first, at 3300 cm⁻¹, corresponds to the OH group and has the same intensity for all the assays; the other, at 2900 cm⁻¹, belongs to the alkane group and has a different intensity for each adhesive. On the whole, the 1:1 PVA:starch ratio does not favor crosslinking in the adhesive structure and causes the viscosity reduction, whereas the viscosity is higher for the other ratios.
It was also observed that adhesives made with modified starch had better characteristics, but adhesives with high concentrations of native starch could match the properties of adhesives made with low concentrations of modified starch. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=polyvinyl%20alcohol" title="polyvinyl alcohol">polyvinyl alcohol</a>, <a href="https://publications.waset.org/abstracts/search?q=PVA" title=" PVA"> PVA</a>, <a href="https://publications.waset.org/abstracts/search?q=chemical%20modification" title=" chemical modification"> chemical modification</a>, <a href="https://publications.waset.org/abstracts/search?q=starch" title=" starch"> starch</a>, <a href="https://publications.waset.org/abstracts/search?q=FTIR" title=" FTIR"> FTIR</a>, <a href="https://publications.waset.org/abstracts/search?q=viscosity" title=" viscosity"> viscosity</a>, <a href="https://publications.waset.org/abstracts/search?q=shear%20strength" title=" shear strength"> shear strength</a> </p> <a href="https://publications.waset.org/abstracts/114442/adhesive-based-upon-polyvinyl-alcohol-and-chemical-modified-oca-oxalis-tuberosa-starch" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114442.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">154</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> The Relationship between Spindle Sound and Tool Performance in Turning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20Seemuang">N. Seemuang</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20McLeay"> T. McLeay</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Slatter"> T. Slatter </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Worn tools have a direct effect on the surface finish and part accuracy. Tool condition monitoring systems have been developed over a long period and used to avoid a loss of productivity resulting from using a worn tool. However, the majority of tool monitoring research has applied expensive sensing systems not suitable for production. In this work, the cutting sound in a turning machine was studied using a microphone. Machining trials using seven cutting conditions were conducted until the observable flank wear width (FWW) on the main cutting edge exceeded 0.4 mm. The cutting inserts were removed from the tool holder and the flank wear width was measured optically. A microphone with a built-in preamplifier was used to record the machining sound of EN24 steel being face turned by a CNC lathe in a wet cutting condition using constant surface speed control. The sound was sampled at 50 kS/s, and all sound signals recorded from the microphone were transformed into the frequency domain by FFT in order to establish the frequency content in the audio signature that could then be used for tool condition monitoring. The feature extracted from the audio signal was compared to the flank wear progression on the cutting inserts. The spectrogram reveals a promising feature, named ‘spindle noise’, which is emitted by the main spindle motor of the turning machine. 
The spindle noise frequency was detected at 5.86 kHz regardless of the cutting conditions used on this particular CNC lathe. Varying the cutting speed and feed rate had an influence on the magnitude of the power spectrum of the spindle noise. The magnitude at the spindle noise frequency changes in conjunction with tool wear progression. The magnitude increases significantly in the transition state between steady-state wear and severe wear. This could be used as a warning signal to prepare for tool replacement or to adapt cutting parameters to extend tool life. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tool%20wear" title="tool wear">tool wear</a>, <a href="https://publications.waset.org/abstracts/search?q=flank%20wear" title=" flank wear"> flank wear</a>, <a href="https://publications.waset.org/abstracts/search?q=condition%20monitoring" title=" condition monitoring"> condition monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=spindle%20noise" title=" spindle noise"> spindle noise</a> </p> <a href="https://publications.waset.org/abstracts/32232/the-relationship-between-spindle-sound-and-tool-performance-in-turning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32232.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">338</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thiago%20Spilborghs%20Bueno%20Meyer">Thiago Spilborghs Bueno Meyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Plinio%20Thomaz%20Aquino%20Junior"> Plinio Thomaz Aquino Junior</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Through speech, which privileges the functional and interactive nature of the text, it is possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, and explicit purposes such as informing, explaining, convincing, etc. These conditions allow bringing the interaction between humans closer to human-robot interaction, making it natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of the use of neural networks for feature selection and emotion recognition was verified. For this purpose, the use of neural networks and the comparison of models, such as recurrent neural networks and deep neural networks, are proposed in order to classify emotions from speech signals and verify the quality of recognition. This is expected to enable the implementation of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the domestic environment. Tests were performed using only the Mel-Frequency Cepstral Coefficients, as well as tests with several features: Delta-MFCC, spectral contrast, and the Mel spectrogram. 
To carry out the training, validation, and testing of the neural networks, the eNTERFACE’05 database was used, which has 42 speakers from 14 different nationalities speaking the English language. The data from the chosen database are videos that, for use in neural networks, were converted into audio. As a result, a classification accuracy of 51.969% was found when using the deep neural network, while the recurrent neural network achieved an accuracy of 44.09%. The results are more accurate when only the Mel-Frequency Cepstral Coefficients are used for classification with the deep neural network; in only one case is a higher accuracy observed for the recurrent neural network, which occurs when several features are used with a batch size of 73 and 100 training epochs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title="emotion recognition">emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=speech" title=" speech"> speech</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=human-robot%20interaction" title=" human-robot interaction"> human-robot interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/145908/speech-emotion-recognition-a-dnn-and-lstm-comparison-in-single-and-multiple-feature-application" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">170</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> Analyzing the Sound of Space - The Glissando of the Planets and the Spiral Movement on the Sound of Earth, Saturn and Jupiter</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=L.%20Tonia">L. Tonia</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20Daglis"> I. Daglis</a>, <a href="https://publications.waset.org/abstracts/search?q=W.%20Kurth"> W. Kurth</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The sound of the universe creates an affinity with the sounds of music. The analysis of the sound of space focuses on the existence of a tone material, the microstructure and macrostructure, and the form of the sound through the signals recorded during the flight of the Van Allen Probes spacecraft and Cassini’s mission. The sound is derived from frequencies that belong to electromagnetic waves. The Plasma Wave Science instrument and the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) recorded the signals from space. A transformation of those signals to audio gave the opportunity to study and analyze the sound. 
Because a musical tone pitch corresponds to a frequency and every electromagnetic wave likewise has a frequency, the creation of a musical score, which appears as the sound of space, can give information about the form, the symmetry, and the harmony of the sound. The conversion of space radio emissions to audio provides a number of tone pitches corresponding to the original frequencies. Through the processing of these sounds, we have the opportunity to present a music score that was “composed” from space. In this score, we can see some basic features associated with the musical form, the structure, the tone center of the musical material, and the construction and deconstruction of the sound. The structure, which was built through a harmonic world, includes tone centers, major and minor scales, sequences of chords, and types of cadences. The form of the sound represents the symmetry of a spiral movement not only in its micro-structural but also in its macro-structural shape. Multiple glissando sounds in linear and polyphonic processes, found in the magnetic fields around Earth, Saturn, and Jupiter, as well as a spiral movement, appeared on the spectrogram of the sound. Whistles, Auroral Kilometric Radiations, and Chorus emissions reveal movements similar to musical excerpts from works by contemporary composers such as Sofia Gubaidulina, Iannis Xenakis, and Einojuhani Rautavaara. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=space%20sound%20analysis" title="space sound analysis">space sound analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=spiral" title=" spiral"> spiral</a>, <a href="https://publications.waset.org/abstracts/search?q=space%20music" title=" space music"> space music</a>, <a href="https://publications.waset.org/abstracts/search?q=analysis" title=" analysis"> analysis</a> </p> <a href="https://publications.waset.org/abstracts/141526/analyzing-the-sound-of-space-the-glissando-of-the-planets-and-the-spiral-movement-on-the-sound-of-earth-saturn-and-jupiter" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141526.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">176</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Luis%20Alvarado">Luis Alvarado</a>, <a href="https://publications.waset.org/abstracts/search?q=Victor%20Poblete"> Victor Poblete</a>, <a href="https://publications.waset.org/abstracts/search?q=Isaac%20Gonzalez"> Isaac Gonzalez</a>, <a href="https://publications.waset.org/abstracts/search?q=Yetzabeth%20Gonzalez"> Yetzabeth Gonzalez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed in reverberant acoustic environments and at distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. 
These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 s [anechoic] and approximately 1, 2, and 3 s), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will dramatically decrease under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed performance as high as that of the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chord%20recognition" title="chord recognition">chord recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title=" deep neural networks"> deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=music%20information%20retrieval" title=" music information retrieval"> music information retrieval</a> </p> <a href="https://publications.waset.org/abstracts/92608/robustness-of-the-deep-chroma-extractor-and-locally-normalized-quarter-tone-filters-in-automatic-chord-estimation-under-reverberant-conditions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92608.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5</span> Duration of Isolated Vowels in Infants with Cochlear Implants</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Paris%20Binos">Paris Binos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present work investigates developmental aspects of the duration of isolated vowels in infants with normal hearing compared to those who received cochlear implants (CIs) before two years of age. Infants with normal hearing produced shorter vowel durations, a finding related to more mature production abilities. First isolated vowels are transparent during the protophonic stage, as evidence of increased motor and linguistic control. Vowel duration is a crucial factor for the transition from prelexical speech to normal adult speech. Despite the current knowledge of data for infants with normal hearing, more research is needed to unravel production skills in early-implanted children. 
Thus, isolated vowel productions by two congenitally hearing-impaired Greek infants (implantation ages 1:4-1:11; post-implant ages 0:6-1:3) were recorded and sampled for six months after implantation with a Nucleus-24. The results were compared with the productions of three normal-hearing infants (chronological ages 0:8-1:1). Vegetative data and vocalizations masked by external noise or sounds were excluded. Participants had no other disabilities and had unknown deafness etiology. Prior to implantation, the infants had an average unaided hearing loss of 95-110 dB HL, while the post-implantation PTA decreased to 10-38 dB HL. The current research offers a methodology for the processing of prelinguistic productions based on a combination of acoustical and auditory analyses. Based on the current methodological framework, duration was measured on wideband spectrograms, from the onset of voicing to the end of the vowel. The end was marked by two co-occurring events: 1) the onset of aperiodicity with a rapid change in amplitude in the waveform and 2) a loss of formant energy. Cut-off levels of significance were set at 0.05 for all tests. Bonferroni post hoc tests indicated a significant difference between the mean vowel duration of infants wearing CIs and that of their normal-hearing peers. Thus, the mean vowel duration of infants with CIs was longer than that of their normal-hearing peers (p = 0.000). The current longitudinal findings contribute to the existing data on the performance of children wearing CIs at a very young age and also enrich the data for the Greek language. The above-described weakness in CI performance is a challenge for future work in speech processing and CI processing strategies. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cochlear%20implant" title="cochlear implant">cochlear implant</a>, <a href="https://publications.waset.org/abstracts/search?q=duration" title=" duration"> duration</a>, <a href="https://publications.waset.org/abstracts/search?q=spectrogram" title=" spectrogram"> spectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=vowel" title=" vowel"> vowel</a> </p> <a href="https://publications.waset.org/abstracts/64394/duration-of-isolated-vowels-in-infants-with-cochlear-implants" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64394.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">261</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4</span> Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jun%20Won%20Kim">Jun Won Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of the TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. 
Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing the TGC at rest between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the signal processing toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, which were adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in the resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which contains information about neuronal interactions from the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC. 
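<p class="card-text">For readers who want to experiment with the kind of coupling measure described above, the following minimal Python sketch (assuming NumPy and SciPy rather than the Matlab toolbox mentioned in the abstract) estimates theta-phase gamma-amplitude coupling with a mean-vector-length measure; the helper names are illustrative, and this estimator may differ from the exact TGC metric used in the study.</p> <pre><code class="language-python">
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase Butterworth band-pass filter (cut-offs normalised to Nyquist).
    b, a = butter(order, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="band")
    return filtfilt(b, a, x)

def theta_gamma_coupling(eeg, fs):
    # Mean-vector-length estimate of theta-phase gamma-amplitude coupling:
    # theta (4-8 Hz) phase modulating the gamma (30-80 Hz) envelope.
    theta_phase = np.angle(hilbert(bandpass(eeg, 4.0, 8.0, fs)))
    gamma_amp = np.abs(hilbert(bandpass(eeg, 30.0, 80.0, fs)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))

# Usage on a synthetic 10 s single-channel record sampled at 250 Hz (illustrative only).
fs = 250
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
print(round(theta_gamma_coupling(eeg, fs), 4))
</code></pre>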
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=quantitative%20electroencephalography%20%28QEEG%29" title="quantitative electroencephalography (QEEG)">quantitative electroencephalography (QEEG)</a>, <a href="https://publications.waset.org/abstracts/search?q=theta-phase%20gamma-amplitude%20coupling%20%28TGC%29" title=" theta-phase gamma-amplitude coupling (TGC)"> theta-phase gamma-amplitude coupling (TGC)</a>, <a href="https://publications.waset.org/abstracts/search?q=schizophrenia" title=" schizophrenia"> schizophrenia</a>, <a href="https://publications.waset.org/abstracts/search?q=diagnostic%20utility" title=" diagnostic utility"> diagnostic utility</a> </p> <a href="https://publications.waset.org/abstracts/82231/theta-phase-gamma-amplitude-coupling-as-a-neurophysiological-marker-in-neuroleptic-naive-schizophrenia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82231.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3</span> Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=James%20Rigor%20Camacho">James Rigor Camacho</a>, <a href="https://publications.waset.org/abstracts/search?q=Wansu%20Lim"> Wansu Lim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are computers with high performance that can process complex algorithms. It is capable of collecting, processing, and storing data on its own. It can also analyze and apply complicated algorithms like localization, detection, and recognition on a real-time application, making it a powerful embedded device. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated to the open-source brain computer-interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on Edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. Until the emotional state was identified, the EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, which is a supervised learning system. 
In EEG signal processing, after each EEG signal has been received in real time and translated from the time domain to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately show the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed. The next stage is to identify the features that have been chosen to predict emotion in EEG data using the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device, like the NVIDIA Jetson Nano. On the cutting edge of AI, EEG-based emotion identification can be employed in applications that can rapidly expand research and industrial adoption. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20AI%20device" title="edge AI device">edge AI device</a>, <a href="https://publications.waset.org/abstracts/search?q=EEG" title=" EEG"> EEG</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition%20system" title=" emotion recognition system"> emotion recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=supervised%20learning%20algorithm" title=" supervised learning algorithm"> supervised learning algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=sensors" title=" sensors"> sensors</a> </p> <a href="https://publications.waset.org/abstracts/147311/development-of-an-eeg-based-real-time-emotion-recognition-system-on-edge-ai" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147311.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">105</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2</span> Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=George%20Zhou">George Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Yunchan%20Chen"> Yunchan Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Candace%20Chien"> Candace Chien</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Kidney replacement therapy is the current standard of care for end-stage renal diseases. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. 
Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope. For each patient, blood flow sounds were collected at 6 different locations along the patient’s AVF. The 6 locations are artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds are labeled as “patent” (normal) or “stenotic” (abnormal). The labels are validated against concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF. For example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research [1]. Herein, we study ordinal (i.e., integer) encoding schemes. The numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sound and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline performance of our ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively. Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, we see that using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, or 100) results in progressively improved performance. In theory, the integer values do not matter since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with location encodings and converge on the same solution. However, in the setting of limited data and computation resources, increasing the importance at initialization either leads to faster convergence or helps the model escape a local minimum. 
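<p class="card-text">The scaled ordinal encoding described above can be illustrated with a short Python sketch (assuming NumPy; the SITE_ORDER list and the encode_site and append_site_code helpers are hypothetical names, not the authors' code) that maps each recording site to a scaled integer and concatenates it to a flattened spectrogram feature vector.</p> <pre><code class="language-python">
import numpy as np

# Hypothetical helpers reproducing the scaled ordinal location encodings compared above
# (artery=0 ... anastomosis=5, multiplied by a scale factor of 1, 10, or 100).
SITE_ORDER = ["artery", "arch", "proximal", "middle", "distal", "anastomosis"]

def encode_site(site, scale=100.0):
    # Map a recording site to its scaled integer code.
    return SITE_ORDER.index(site.lower()) * scale

def append_site_code(features, site, scale=100.0):
    # Concatenate the scaled site code to a flattened spectrogram feature vector,
    # mirroring the idea of appending location metadata before the classifier head.
    return np.concatenate([np.ravel(features), [encode_site(site, scale)]])

# Usage: a dummy 16x16 "spectrogram embedding" recorded at the venous arch.
feats = np.random.rand(16, 16).astype(np.float32)
x = append_site_code(feats, "arch", scale=100.0)
print(x.shape, x[-1])   # (257,) 100.0
</code></pre>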
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=arteriovenous%20fistula" title="arteriovenous fistula">arteriovenous fistula</a>, <a href="https://publications.waset.org/abstracts/search?q=blood%20flow%20sounds" title=" blood flow sounds"> blood flow sounds</a>, <a href="https://publications.waset.org/abstracts/search?q=metadata%20encoding" title=" metadata encoding"> metadata encoding</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/163552/categorical-metadata-encoding-schemes-for-arteriovenous-fistula-blood-flow-sound-classification-scaling-numerical-representations-leads-to-improved-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163552.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">87</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1</span> Analysis of Vibration and Shock Levels during Transport and Handling of Bananas within the Post-Harvest Supply Chain in Australia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Indika%20Fernando">Indika Fernando</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiangang%20Fei"> Jiangang Fei</a>, <a href="https://publications.waset.org/abstracts/search?q=Roger%20%20Stanley"> Roger Stanley</a>, <a href="https://publications.waset.org/abstracts/search?q=Hossein%20Enshaei"> Hossein Enshaei</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Delicate produce such as fresh fruits are increasingly susceptible to physiological damage during the essential post-harvest operations such as transport and handling. Vibration and shock during the distribution are identified factors for produce damage within post-harvest supply chains. Mechanical damages caused during transit may significantly diminish the quality of fresh produce which may also result in a substantial wastage. Bananas are one of the staple fruit crops and the most sold supermarket produce in Australia. It is also the largest horticultural industry in the state of Queensland where 95% of the total production of bananas are cultivated. This results in significantly lengthy interstate supply chains where fruits are exposed to prolonged vibration and shocks. This paper is focused on determining the shock and vibration levels experienced by packaged bananas during transit from the farm gate to the retail market. Tri-axis acceleration data were captured by custom made accelerometer based data loggers which were set to a predetermined sampling rate of 400 Hz. The devices recorded data continuously for 96 Hours in the interstate journey of nearly 3000 Km from the growing fields in far north Queensland to the central distribution centre in Melbourne in Victoria. After the bananas were ripened at the ripening facility in Melbourne, the data loggers were used to capture the transport and handling conditions from the central distribution centre to three retail outlets within the outskirts of Melbourne. The quality of bananas were assessed before and after transport at each location along the supply chain. 
Time series vibration and shock data were used to determine the frequency and the severity of the transient shocks experienced by the packages. A frequency spectrogram was generated to determine the dominant frequencies within each segment of the post-harvest supply chain. Root Mean Square (RMS) acceleration levels were calculated to characterise the vibration intensity during transport. Data were further analysed by the Fast Fourier Transform (FFT), and Power Spectral Density (PSD) profiles were generated to determine the critical frequency ranges. This revealed the frequency ranges in which elevated energy levels were transferred to the packages. It was found that vertical vibration was the highest and that the acceleration levels mostly oscillated within ±1 g during transport. Several shock responses exceeding this range were recorded, which were mostly attributed to package handling. These detrimental high-impact shocks may eventually lead to mechanical damage in bananas, such as impact bruising, compression bruising, and neck injuries, which affect their freshness and visual quality. It was revealed that the frequency ranges of 0-5 Hz and 15-20 Hz transfer an elevated level of vibration energy to the packaged bananas, which may result in abrasion damage such as scuffing, fruit rub, and blackened rub. Further research is indicated, especially in the identified critical frequency ranges, to minimise the exposure of the fruit to the harmful effects of vibration. Improving handling conditions, together with further study of package failure mechanisms under transient shock excitation, will be crucial to improving the visual quality of bananas within the post-harvest supply chain in Australia. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bananas" title="bananas">bananas</a>, <a href="https://publications.waset.org/abstracts/search?q=handling" title=" handling"> handling</a>, <a href="https://publications.waset.org/abstracts/search?q=post-harvest" title=" post-harvest"> post-harvest</a>, <a href="https://publications.waset.org/abstracts/search?q=supply%20chain" title=" supply chain"> supply chain</a>, <a href="https://publications.waset.org/abstracts/search?q=shocks" title=" shocks"> shocks</a>, <a href="https://publications.waset.org/abstracts/search?q=transport" title=" transport"> transport</a>, <a href="https://publications.waset.org/abstracts/search?q=vibration" title=" vibration"> vibration</a> </p> <a href="https://publications.waset.org/abstracts/87293/analysis-of-vibration-and-shock-levels-during-transport-and-handling-of-bananas-within-the-post-harvest-supply-chain-in-australia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87293.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">190</span> </span> </div> </div>
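<p class="card-text">As an illustration of the RMS and PSD analysis described in the preceding abstract, the following minimal Python sketch (assuming NumPy and SciPy; the rms_and_psd helper and the synthetic signal are illustrative, not the study's pipeline) computes the RMS acceleration of one axis and a Welch power spectral density estimate, then inspects the 0-5 Hz band flagged as critical.</p> <pre><code class="language-python">
import numpy as np
from scipy.signal import welch

FS = 400  # logger sampling rate reported above (Hz)

def rms_and_psd(accel_g, fs=FS):
    # RMS acceleration plus a Welch power spectral density estimate of one axis.
    rms = np.sqrt(np.mean(np.square(accel_g)))
    freqs, psd = welch(accel_g, fs=fs, nperseg=4096)
    return rms, freqs, psd

# Usage on a synthetic 60 s vertical-axis record (illustrative values only).
t = np.arange(0, 60, 1.0 / FS)
accel = 0.3 * np.sin(2 * np.pi * 3 * t) + 0.05 * np.random.randn(t.size)
rms, freqs, psd = rms_and_psd(accel)
idx = np.searchsorted(freqs, 5.0)   # inspect the 0-5 Hz band flagged in the study
print("RMS [g]:", round(float(rms), 3), "mean PSD 0-5 Hz [g^2/Hz]:", float(psd[:idx].mean()))
</code></pre>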
class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
