Search results for: closed-set tex-independent speaker identification system (CISI)
data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="closed-set tex-independent speaker identification system (CISI)"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 19950</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: closed-set tex-independent speaker identification system (CISI)</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19950</span> An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ben%20Soltane%20Cheima">Ben Soltane Cheima</a>, <a href="https://publications.waset.org/abstracts/search?q=Ittansa%20Yonas%20Kelbesa"> Ittansa Yonas Kelbesa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speaker Identification (SI) is the task of establishing identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still need of improvement. In this paper, a Closed-Set Tex-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficient (MFCC) as feature extraction and suitable combination of vector quantization (VQ) and Gaussian Mixture Model (GMM) together with Expectation Maximization algorithm (EM) for speaker modeling. 
The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initializing the GMM parameter estimates in the EM step also improved the convergence rate and system performance. The system further uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set tex-independent speaker identification system (CISI)
Procedia: https://publications.waset.org/abstracts/16253/an-intelligent-text-independent-speaker-identification-using-vq-gmm-model-based-multiple-classifier-system | PDF: https://publications.waset.org/abstracts/16253.pdf | Downloads: 309
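
A minimal sketch of the pipeline this abstract describes: MFCC features gated by a crude energy VAD, a k-means codebook standing in for LBG initialization, and an EM-trained diagonal GMM per speaker. The mixture count, VAD threshold, and file-path interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def mfcc_features(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Crude energy-based VAD: keep frames above 10% of the peak RMS energy.
    rms = librosa.feature.rms(y=y)[0]
    n = min(mfcc.shape[1], rms.shape[0])
    return mfcc[:, :n][:, rms[:n] > 0.1 * rms.max()].T   # (frames, n_mfcc)

def train_speaker_model(paths, n_mix=16):
    X = np.vstack([mfcc_features(p) for p in paths])
    # A k-means codebook stands in for LBG to initialize the GMM means;
    # GaussianMixture.fit then runs EM from that starting point.
    km = KMeans(n_clusters=n_mix, n_init=5).fit(X)
    return GaussianMixture(n_components=n_mix, covariance_type="diag",
                           means_init=km.cluster_centers_).fit(X)

def identify(test_path, models):   # models: {speaker_name: fitted GMM}
    X = mfcc_features(test_path)
    # Pick the enrolled speaker whose GMM gives the highest average
    # log-likelihood; VQ distortion could serve as the secondary
    # confidence check the abstract mentions.
    return max(models, key=lambda spk: models[spk].score(X))
```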
badge-info">19949</span> Speaker Recognition Using LIRA Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nestor%20A.%20Garcia%20Fragoso">Nestor A. Garcia Fragoso</a>, <a href="https://publications.waset.org/abstracts/search?q=Tetyana%20Baydyk"> Tetyana Baydyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernst%20Kussul"> Ernst Kussul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article contains information from our investigation in the field of voice recognition. For this purpose, we created a voice database that contains different phrases in two languages, English and Spanish, for men and women. As a classifier, the LIRA (Limited Receptive Area) grayscale neural classifier was selected. The LIRA grayscale neural classifier was developed for image recognition tasks and demonstrated good results. Therefore, we decided to develop a recognition system using this classifier for voice recognition. From a specific set of speakers, we can recognize the speaker’s voice. For this purpose, the system uses spectrograms of the voice signals as input to the system, extracts the characteristics and identifies the speaker. The results are described and analyzed in this article. The classifier can be used for speaker identification in security system or smart buildings for different types of intelligent devices. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=extreme%20learning" title="extreme learning">extreme learning</a>, <a href="https://publications.waset.org/abstracts/search?q=LIRA%20neural%20classifier" title=" LIRA neural classifier"> LIRA neural classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title=" speaker identification"> speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20recognition" title=" voice recognition"> voice recognition</a> </p> <a href="https://publications.waset.org/abstracts/112384/speaker-recognition-using-lira-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19948</span> USE-Net: SE-Block Enhanced U-Net Architecture for Robust Speaker Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kilari%20Nikhil">Kilari Nikhil</a>, <a href="https://publications.waset.org/abstracts/search?q=Ankur%20Tibrewal"> Ankur Tibrewal</a>, <a href="https://publications.waset.org/abstracts/search?q=Srinivas%20Kruthiventi%20S.%20S."> Srinivas Kruthiventi S. S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Conventional speaker identification systems often fall short of capturing the diverse variations present in speech data due to fixed-scale architectures. In this research, we propose a CNN-based architecture, USENet, designed to overcome these limitations. Leveraging two key techniques, our approach achieves superior performance on the VoxCeleb 1 Dataset without any pre-training. 

19948. USE-Net: SE-Block Enhanced U-Net Architecture for Robust Speaker Identification
Authors: Kilari Nikhil, Ankur Tibrewal, Srinivas Kruthiventi S. S.
Abstract: Conventional speaker identification systems often fall short of capturing the diverse variations present in speech data because of their fixed-scale architectures. In this research, we propose a CNN-based architecture, USENet, designed to overcome these limitations. Leveraging two key techniques, our approach achieves superior performance on the VoxCeleb1 dataset without any pre-training. First, we adopt a U-Net-inspired design to extract features at multiple scales, empowering the model to capture speech characteristics effectively. Second, we introduce the squeeze-and-excitation block to enhance spatial feature learning. The proposed architecture shows significant advances in speaker identification, outperforms existing methods, and holds promise for future research in this domain.
Keywords: multi-scale feature extraction, squeeze and excitation, VoxCeleb1 speaker identification, mel-spectrograms, USENet
Procedia: https://publications.waset.org/abstracts/170441/use-net-se-block-enhanced-u-net-architecture-for-robust-speaker-identification | PDF: https://publications.waset.org/abstracts/170441.pdf | Downloads: 74
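
The squeeze-and-excitation block named in the title is standard enough to sketch; the channel count and reduction ratio below are illustrative, and where exactly USE-Net inserts the block within the U-Net is not specified here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # squeeze
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),  # excite
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, C, H, W) spectrogram maps
        w = x.mean(dim=(2, 3))            # global average pool -> (batch, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                      # channel-wise reweighting
```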
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=forensic" title="forensic">forensic</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=voice" title=" voice"> voice</a>, <a href="https://publications.waset.org/abstracts/search?q=speech" title=" speech"> speech</a>, <a href="https://publications.waset.org/abstracts/search?q=disguise" title=" disguise"> disguise</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a> </p> <a href="https://publications.waset.org/abstracts/47439/acoustic-analysis-for-comparison-and-identification-of-normal-and-disguised-speech-of-individuals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47439.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">369</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19946</span> Developed Text-Independent Speaker Verification System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Arif">Mohammed Arif</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdessalam%20Kifouche"> Abdessalam Kifouche</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech is a very convenient way of communication between people and machines. It conveys information about the identity of the talker. Since speaker recognition technology is increasingly securing our everyday lives, the objective of this paper is to develop two automatic text-independent speaker verification systems (TI SV) using low-level spectral features and machine learning methods. (i) The first system is based on a support vector machine (SVM), which was widely used in voice signal processing with the aim of speaker recognition involving verifying the identity of the speaker based on its voice characteristics, and (ii) the second is based on Gaussian Mixture Model (GMM) and Universal Background Model (UBM) to combine different functions from different resources to implement the SVM based. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20verification" title="speaker verification">speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=text-independent" title=" text-independent"> text-independent</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=cepstral%20analysis" title=" cepstral analysis"> cepstral analysis</a> </p> <a href="https://publications.waset.org/abstracts/183493/developed-text-independent-speaker-verification-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">58</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19945</span> Comparative Methods for Speech Enhancement and the Effects on Text-Independent Speaker Identification Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Ajgou">R. Ajgou</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Sbaa"> S. Sbaa</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ghendir"> S. Ghendir</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Chemsa"> A. Chemsa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Taleb-Ahmed"> A. Taleb-Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The speech enhancement algorithm is to improve speech quality. In this paper, we review some speech enhancement methods and we evaluated their performance based on Perceptual Evaluation of Speech Quality scores (PESQ, ITU-T P.862). All method was evaluated in presence of different kind of noise using TIMIT database and NOIZEUS noisy speech corpus.. The noise was taken from the AURORA database and includes suburban train noise, babble, car, exhibition hall, restaurant, street, airport and train station noise. Simulation results showed improved performance of speech enhancement for Tracking of non-stationary noise approach in comparison with various methods in terms of PESQ measure. Moreover, we have evaluated the effects of the speech enhancement technique on Speaker Identification system based on autoregressive (AR) model and Mel-frequency Cepstral coefficients (MFCC). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20enhancement" title="speech enhancement">speech enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=pesq" title=" pesq"> pesq</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a> </p> <a href="https://publications.waset.org/abstracts/31102/comparative-methods-for-speech-enhancement-and-the-effects-on-text-independent-speaker-identification-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31102.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">424</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19944</span> A Two-Step Framework for Unsupervised Speaker Segmentation Using BIC and Artificial Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Alwosheel">Ahmad Alwosheel</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Alqaraawi"> Ahmed Alqaraawi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work proposes a new speaker segmentation approach for two speakers. It is an online approach that does not require a prior information about speaker models. It has two phases, a conventional approach such as unsupervised BIC-based is utilized in the first phase to detect speaker changes and train a Neural Network, while in the second phase, the output trained parameters from the Neural Network are used to predict next incoming audio stream. Using this approach, a comparable accuracy to similar BIC-based approaches is achieved with a significant improvement in terms of computation time. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title="artificial neural network">artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=diarization" title=" diarization"> diarization</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20indexing" title=" speaker indexing"> speaker indexing</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20segmentation" title=" speaker segmentation"> speaker segmentation</a> </p> <a href="https://publications.waset.org/abstracts/27191/a-two-step-framework-for-unsupervised-speaker-segmentation-using-bic-and-artificial-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27191.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">502</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19943</span> Effect of Clinical Depression on Automatic Speaker Verification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sheeraz%20Memon">Sheeraz Memon</a>, <a href="https://publications.waset.org/abstracts/search?q=Namunu%20C.%20Maddage"> Namunu C. Maddage</a>, <a href="https://publications.waset.org/abstracts/search?q=Margaret%20Lech"> Margaret Lech</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicholas%20Allen"> Nicholas Allen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The effect of a clinical environment on the accuracy of the speaker verification was tested. The speaker verification tests were performed within homogeneous environments containing clinically depressed speakers only, and non-depresses speakers only, as well as within mixed environments containing different mixtures of both climatically depressed and non-depressed speakers. The speaker verification framework included the MFCCs features and the GMM modeling and classification method. The speaker verification experiments within homogeneous environments showed 5.1% increase of the EER within the clinically depressed environment when compared to the non-depressed environment. It indicated that the clinical depression increases the intra-speaker variability and makes the speaker verification task more challenging. Experiments with mixed environments indicated that the increase of the percentage of the depressed individuals within a mixed environment increases the speaker verification equal error rates. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20verification" title="speaker verification">speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=GMM" title=" GMM"> GMM</a>, <a href="https://publications.waset.org/abstracts/search?q=EM" title=" EM"> EM</a>, <a href="https://publications.waset.org/abstracts/search?q=clinical%20environment" title=" clinical environment"> clinical environment</a>, <a href="https://publications.waset.org/abstracts/search?q=clinical%20depression" title=" clinical depression"> clinical depression</a> </p> <a href="https://publications.waset.org/abstracts/39436/effect-of-clinical-depression-on-automatic-speaker-verification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39436.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19942</span> A Cross-Dialect Statistical Analysis of Final Declarative Intonation in Tuvinian</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20Beziakina">D. Beziakina</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Bulgakova"> E. Bulgakova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study continues the research on Tuvinian intonation and presents a general cross-dialect analysis of intonation of Tuvinian declarative utterances, specifically the character of the tone movement in order to test the hypothesis about the prevalence of level tone in some Tuvinian dialects. The results of the analysis of basic pitch characteristics of Tuvinian speech (in general and in comparison with two other Turkic languages - Uzbek and Azerbaijani) are also given in this paper. The goal of our work was to obtain the ranges of pitch parameter values typical for Tuvinian speech. Such language-specific values can be used in speaker identification systems in order to get more accurate results of ethnic speech analysis. We also present the results of a cross-dialect analysis of declarative intonation in the poorly studied Tuvinian language. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20analysis" title="speech analysis">speech analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20analysis" title=" statistical analysis"> statistical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification%20of%20person" title=" identification of person"> identification of person</a> </p> <a href="https://publications.waset.org/abstracts/12497/a-cross-dialect-statistical-analysis-of-final-declarative-intonation-in-tuvinian" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12497.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19941</span> The Effect of The Speaker's Speaking Style as A Factor of Understanding and Comfort of The Listener</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Made%20Rahayu%20Putri%20Saron">Made Rahayu Putri Saron</a>, <a href="https://publications.waset.org/abstracts/search?q=Mochamad%20Nizar%20Palefi%20Ma%E2%80%99ady"> Mochamad Nizar Palefi Ma’ady</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Communication skills are important in everyday life, communication can be done verbally in the form of oral or written and nonverbal in the form of expressions or body movements. Good communication should be able to provide information clearly, and there is feedback from the speaker and listener. However, it is often found that the information conveyed is not clear, and there is no feedback from the listeners, so it cannot be ensured that the communication is effective and understandable. The speaker's understanding of the topic is one of the supporting factors for the listener to be able to accept the meaning of the conversation. However, based on the results of the literature review, it found that the influence factors of person speaking style are as follows: (i) environmental conditions; (ii) voice, articulation, and accent; (iii) gender; (iv) personality; (v) speech disorders (Dysarthria); when speaking also have an important influence on speaker’s speaking style. It can be concluded the factors that support understanding and comfort of the listener are dependent on the nature of the speaker (environmental conditions, voice, gender, personality) or also it the speaker have speech disorders. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=listener" title="listener">listener</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20speaking" title=" public speaking"> public speaking</a>, <a href="https://publications.waset.org/abstracts/search?q=speaking%20style" title=" speaking style"> speaking style</a>, <a href="https://publications.waset.org/abstracts/search?q=understanding" title=" understanding"> understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=and%20comfortable%20factor" title=" and comfortable factor"> and comfortable factor</a> </p> <a href="https://publications.waset.org/abstracts/145442/the-effect-of-the-speakers-speaking-style-as-a-factor-of-understanding-and-comfort-of-the-listener" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145442.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19940</span> Multi-Modal Feature Fusion Network for Speaker Recognition Task</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiang%20Shijie">Xiang Shijie</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhou%20Dong"> Zhou Dong</a>, <a href="https://publications.waset.org/abstracts/search?q=Tian%20Dan"> Tian Dan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speaker recognition is a crucial task in the field of speech processing, aimed at identifying individuals based on their vocal characteristics. However, existing speaker recognition methods face numerous challenges. Traditional methods primarily rely on audio signals, which often suffer from limitations in noisy environments, variations in speaking style, and insufficient sample sizes. Additionally, relying solely on audio features can sometimes fail to capture the unique identity of the speaker comprehensively, impacting recognition accuracy. To address these issues, we propose a multi-modal network architecture that simultaneously processes both audio and text signals. By gradually integrating audio and text features, we leverage the strengths of both modalities to enhance the robustness and accuracy of speaker recognition. Our experiments demonstrate significant improvements with this multi-modal approach, particularly in complex environments, where recognition performance has been notably enhanced. Our research not only highlights the limitations of current speaker recognition methods but also showcases the effectiveness of multi-modal fusion techniques in overcoming these limitations, providing valuable insights for future research. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title="feature fusion">feature fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=memory%20network" title=" memory network"> memory network</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20input" title=" multimodal input"> multimodal input</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a> </p> <a href="https://publications.waset.org/abstracts/191527/multi-modal-feature-fusion-network-for-speaker-recognition-task" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/191527.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">33</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19939</span> Experimental Study on the Heat Transfer Characteristics of the 200W Class Woofer Speaker</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyung-Jin%20Kim">Hyung-Jin Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Dae-Wan%20Kim"> Dae-Wan Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Moo-Yeon%20Lee"> Moo-Yeon Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this study is to experimentally investigate the heat transfer characteristics of 200 W class woofer speaker units with the input voice signals. The temperature and heat transfer characteristics of the 200 W class woofer speaker unit were experimentally tested with the several input voice signals such as 1500 Hz, 2500 Hz, and 5000 Hz respectively. From the experiments, it can be observed that the temperature of the woofer speaker unit including the voice-coil part increases with a decrease in input voice signals. Also, the temperature difference in measured points of the voice coil is increased with decrease of the input voice signals. In addition, the heat transfer characteristics of the woofer speaker in case of the input voice signal of 1500 Hz is 40% higher than that of the woofer speaker in case of the input voice signal of 5000 Hz at the measuring time of 200 seconds. It can be concluded from the experiments that initially the temperature of the voice signal increases rapidly with time, after a certain period of time it increases exponentially. Also during this time dependent temperature change, it can be observed that high voice signal is stable than low voice signal. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heat%20transfer" title="heat transfer">heat transfer</a>, <a href="https://publications.waset.org/abstracts/search?q=temperature" title=" temperature"> temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20coil" title=" voice coil"> voice coil</a>, <a href="https://publications.waset.org/abstracts/search?q=woofer%20speaker" title=" woofer speaker"> woofer speaker</a> </p> <a href="https://publications.waset.org/abstracts/5142/experimental-study-on-the-heat-transfer-characteristics-of-the-200w-class-woofer-speaker" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">360</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19938</span> Performance Evaluation of Acoustic-Spectrographic Voice Identification Method in Native and Non-Native Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=E.%20Krasnova">E. Krasnova</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Bulgakova"> E. Bulgakova</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Shchemelinin"> V. Shchemelinin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with acoustic-spectrographic voice identification method in terms of its performance in non-native language speech. Performance evaluation is conducted by comparing the result of the analysis of recordings containing native language speech with recordings that contain foreign language speech. Our research is based on Tajik and Russian speech of Tajik native speakers due to the character of the criminal situation with drug trafficking. We propose a pilot experiment that represents a primary attempt enter the field. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title="speaker identification">speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic-spectrographic%20method" title=" acoustic-spectrographic method"> acoustic-spectrographic method</a>, <a href="https://publications.waset.org/abstracts/search?q=non-native%20speech" title=" non-native speech"> non-native speech</a>, <a href="https://publications.waset.org/abstracts/search?q=performance%20evaluation" title=" performance evaluation"> performance evaluation</a> </p> <a href="https://publications.waset.org/abstracts/12496/performance-evaluation-of-acoustic-spectrographic-voice-identification-method-in-native-and-non-native-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12496.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19937</span> System Identification and Quantitative Feedback Theory Design of a Lathe Spindle</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Khairudin">M. Khairudin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates the system identification and design quantitative feedback theory (QFT) for the robust control of a lathe spindle. The dynamic of the lathe spindle is uncertain and time variation due to the deepness variation on cutting process. System identification was used to obtain the dynamics model of the lathe spindle. In this work, real time system identification is used to construct a linear model of the system from the nonlinear system. These linear models and its uncertainty bound can then be used for controller synthesis. The real time nonlinear system identification process to obtain a set of linear models of the lathe spindle that represents the operating ranges of the dynamic system. With a selected input signal, the data of output and response is acquired and nonlinear system identification is performed using Matlab to obtain a linear model of the system. Practical design steps are presented in which the QFT-based conditions are formulated to obtain a compensator and pre-filter to control the lathe spindle. The performances of the proposed controller are evaluated in terms of velocity responses of the the lathe machine spindle in corporating deepness on cutting process. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lathe%20spindle" title="lathe spindle">lathe spindle</a>, <a href="https://publications.waset.org/abstracts/search?q=QFT" title=" QFT"> QFT</a>, <a href="https://publications.waset.org/abstracts/search?q=robust%20control" title=" robust control"> robust control</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20identification" title=" system identification"> system identification</a> </p> <a href="https://publications.waset.org/abstracts/20793/system-identification-and-quantitative-feedback-theory-design-of-a-lathe-spindle" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20793.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">543</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19936</span> The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Javad%20Mollakazemi">Mohammad Javad Mollakazemi</a>, <a href="https://publications.waset.org/abstracts/search?q=Farhad%20Asadi"> Farhad Asadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Aref%20Ghafouri"> Aref Ghafouri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we considered and applied parametric modeling for some experimental data of dynamical system. In this study, we investigated the different distribution of output measurement from some dynamical systems. Also, with variance processing in experimental data we obtained the region of nonlinearity in experimental data and then identification of output section is applied in different situation and data distribution. Finally, the effect of the spanning the measurement such as variance to identification and limitation of this approach is explained. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20process" title="Gaussian process">Gaussian process</a>, <a href="https://publications.waset.org/abstracts/search?q=nonlinearity%20distribution" title=" nonlinearity distribution"> nonlinearity distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20identification" title=" system identification"> system identification</a> </p> <a href="https://publications.waset.org/abstracts/17632/the-effect-of-measurement-distribution-on-system-identification-and-detection-of-behavior-of-nonlinearities-of-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17632.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">516</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19935</span> A Transform Domain Function Controlled VSSLMS Algorithm for Sparse System Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cemil%20Turan">Cemil Turan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Shukri%20Salman"> Mohammad Shukri Salman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The convergence rate of the least-mean-square (LMS) algorithm deteriorates if the input signal to the filter is correlated. In a system identification problem, this convergence rate can be improved if the signal is white and/or if the system is sparse. We recently proposed a sparse transform domain LMS-type algorithm that uses a variable step-size for a sparse system identification. The proposed algorithm provided high performance even if the input signal is highly correlated. In this work, we investigate the performance of the proposed TD-LMS algorithm for a large number of filter tap which is also a critical issue for standard LMS algorithm. Additionally, the optimum value of the most important parameter is calculated for all experiments. Moreover, the convergence analysis of the proposed algorithm is provided. The performance of the proposed algorithm has been compared to different algorithms in a sparse system identification setting of different sparsity levels and different number of filter taps. Simulations have shown that the proposed algorithm has prominent performance compared to the other algorithms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adaptive%20filtering" title="adaptive filtering">adaptive filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20system%20identification" title=" sparse system identification"> sparse system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=TD-LMS%20algorithm" title=" TD-LMS algorithm"> TD-LMS algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=VSSLMS%20algorithm" title=" VSSLMS algorithm"> VSSLMS algorithm</a> </p> <a href="https://publications.waset.org/abstracts/72335/a-transform-domain-function-controlled-vsslms-algorithm-for-sparse-system-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72335.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">360</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19934</span> Ultracapacitor State-of-Energy Monitoring System with On-Line Parameter Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20Reichbach">N. Reichbach</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Kuperman"> A. Kuperman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper describes a design of a monitoring system for super capacitor packs in propulsion systems, allowing determining the instantaneous energy capacity under power loading. The system contains real-time recursive-least-squares identification mechanism, estimating the values of pack capacitance and equivalent series resistance. These values are required for accurate calculation of the state-of-energy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=real-time%20monitoring" title="real-time monitoring">real-time monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=RLS%20identification%20algorithm" title=" RLS identification algorithm"> RLS identification algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=state-of-energy" title=" state-of-energy"> state-of-energy</a>, <a href="https://publications.waset.org/abstracts/search?q=super%20capacitor" title=" super capacitor"> super capacitor</a> </p> <a href="https://publications.waset.org/abstracts/13043/ultracapacitor-state-of-energy-monitoring-system-with-on-line-parameter-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13043.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">535</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19933</span> Identification of Impact Load and Partial System Parameters Using 1D-CNN</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xuewen%20Yu">Xuewen Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Danhui%20Dan"> Danhui Dan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The identification of impact load and some hard-to-obtain system parameters is crucial for the activities of analysis, validation, and evaluation in the engineering field. This paper proposes a method that utilizes neural networks based on 1D-CNN to identify the impact load and partial system parameters from measured responses. To this end, forward computations are conducted to provide datasets consisting of the triples (parameter θ, input u, output y). Then neural networks are trained to learn the mapping from input to output, fu|{θ} : y → u, as well as from input and output to parameter, fθ : (u, y) → θ. Afterward, feeding the trained neural networks the measured output response, the input impact load and system parameter can be calculated, respectively. The method is tested on two simulated examples and shows sound accuracy in estimating the impact load (waveform and location) and system parameters. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=impact%20load%20identification" title=" impact load identification"> impact load identification</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20parameter%20identification" title=" system parameter identification"> system parameter identification</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20problem" title=" inverse problem"> inverse problem</a> </p> <a href="https://publications.waset.org/abstracts/173755/identification-of-impact-load-and-partial-system-parameters-using-1d-cnn" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173755.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">123</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19932</span> Kalman Filter Design in Structural Identification with Unknown Excitation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Z.%20Masoumi">Z. Masoumi</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Moaveni"> B. Moaveni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article is about first step of structural health monitoring by identifying structural system in the presence of unknown input. In the structural system identification, identification of structural parameters such as stiffness and damping are considered. In this study, the Kalman filter (KF) design for structural systems with unknown excitation is expressed. External excitations, such as earthquakes, wind or any other forces are not measured or not available. The purpose of this filter is its strengths to estimate the state variables of the system in the presence of unknown input. Also least squares estimation (LSE) method with unknown input is studied. Estimates of parameters have been adopted. Finally, using two examples advantages and drawbacks of both methods are studied. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter%20%28KF%29" title="Kalman filter (KF)">Kalman filter (KF)</a>, <a href="https://publications.waset.org/abstracts/search?q=least%20square%20estimation%20%28LSE%29" title=" least square estimation (LSE)"> least square estimation (LSE)</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20health%20monitoring%20%28SHM%29" title=" structural health monitoring (SHM)"> structural health monitoring (SHM)</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20system%20identification" title=" structural system identification"> structural system identification</a> </p> <a href="https://publications.waset.org/abstracts/49817/kalman-filter-design-in-structural-identification-with-unknown-excitation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49817.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">317</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19931</span> Application of the Discrete Rationalized Haar Transform to Distributed Parameter System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joon-Hoon%20Park">Joon-Hoon Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper the rationalized Haar transform is applied for distributed parameter system identification and estimation. A distributed parameter system is a dynamical and mathematical model described by a partial differential equation. And system identification concerns the problem of determining mathematical models from observed data. The Haar function has some disadvantages of calculation because it contains irrational numbers, for these reasons the rationalized Haar function that has only rational numbers. The algorithm adopted in this paper is based on the transform and operational matrix of the rationalized Haar function. This approach provides more convenient and efficient computational results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=distributed%20parameter%20system" title="distributed parameter system">distributed parameter system</a>, <a href="https://publications.waset.org/abstracts/search?q=rationalized%20Haar%20transform" title=" rationalized Haar transform"> rationalized Haar transform</a>, <a href="https://publications.waset.org/abstracts/search?q=operational%20matrix" title=" operational matrix"> operational matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20identification" title=" system identification "> system identification </a> </p> <a href="https://publications.waset.org/abstracts/24246/application-of-the-discrete-rationalized-haar-transform-to-distributed-parameter-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24246.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">509</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19930</span> Modeling of a UAV Longitudinal Dynamics through System Identification Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Asadullah%20I.%20Qazi">Asadullah I. Qazi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mansoor%20Ahsan"> Mansoor Ahsan</a>, <a href="https://publications.waset.org/abstracts/search?q=Zahir%20Ashraf"> Zahir Ashraf</a>, <a href="https://publications.waset.org/abstracts/search?q=Uzair%20Ahmad"> Uzair Ahmad </a> </p> <p class="card-text"><strong>Abstract:</strong></p> System identification of an Unmanned Aerial Vehicle (UAV), to acquire its mathematical model, is a significant step in the process of aircraft flight automation. The need for reliable mathematical model is an established requirement for autopilot design, flight simulator development, aircraft performance appraisal, analysis of aircraft modifications, preflight testing of prototype aircraft and investigation of fatigue life and stress distribution etc. This research is aimed at system identification of a fixed wing UAV by means of specifically designed flight experiment. The purposely designed flight maneuvers were performed on the UAV and aircraft states were recorded during these flights. Acquired data were preprocessed for noise filtering and bias removal followed by parameter estimation of longitudinal dynamics transfer functions using MATLAB system identification toolbox. Black box identification based transfer function models, in response to elevator and throttle inputs, were estimated using least square error technique. The identification results show a high confidence level and goodness of fit between the estimated model and actual aircraft response. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fixed%20wing%20UAV" title="fixed wing UAV">fixed wing UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20identification" title=" system identification"> system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=black%20box%20modeling" title=" black box modeling"> black box modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=longitudinal%20dynamics" title=" longitudinal dynamics"> longitudinal dynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=least%20square%20error" title=" least square error"> least square error</a> </p> <a href="https://publications.waset.org/abstracts/70091/modeling-of-a-uav-longitudinal-dynamics-through-system-identification-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70091.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">325</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19929</span> Forensic Speaker Verification in Noisy Environmental by Enhancing the Speech Signal Using ICA Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Kamil%20Hasan%20Al-Ali">Ahmed Kamil Hasan Al-Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Bouchra%20Senadji"> Bouchra Senadji</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganesh%20Naik"> Ganesh Naik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose a system to real environmental noise and channel mismatch for forensic speaker verification systems. This method is based on suppressing various types of real environmental noise by using independent component analysis (ICA) algorithm. The enhanced speech signal is applied to mel frequency cepstral coefficients (MFCC) or MFCC feature warping to extract the essential characteristics of the speech signal. Channel effects are reduced using an intermediate vector (i-vector) and probabilistic linear discriminant analysis (PLDA) approach for classification. The proposed algorithm is evaluated by using an Australian forensic voice comparison database, combined with car, street and home noises from QUT-NOISE at a signal to noise ratio (SNR) ranging from -10 dB to 10 dB. Experimental results indicate that the MFCC feature warping-ICA achieves a reduction in equal error rate about (48.22%, 44.66%, and 50.07%) over using MFCC feature warping when the test speech signals are corrupted with random sessions of street, car, and home noises at -10 dB SNR. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=noisy%20forensic%20speaker%20verification" title="noisy forensic speaker verification">noisy forensic speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=ICA%20algorithm" title=" ICA algorithm"> ICA algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC%20feature%20warping" title=" MFCC feature warping"> MFCC feature warping</a> </p> <a href="https://publications.waset.org/abstracts/66332/forensic-speaker-verification-in-noisy-environmental-by-enhancing-the-speech-signal-using-ica-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">408</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19928</span> Modified Form of Margin Based Angular Softmax Loss for Speaker Verification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jamshaid%20ul%20Rahman">Jamshaid ul Rahman</a>, <a href="https://publications.waset.org/abstracts/search?q=Akhter%20Ali"> Akhter Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Adnan%20Manzoor"> Adnan Manzoor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Learning-based systems have received increasing interest in recent years; recognition structures, including end-to-end speak recognition, are one of the hot topics in this area. A famous work on end-to-end speaker verification by using Angular Softmax Loss gained significant importance and is considered useful to directly trains a discriminative model instead of the traditional adopted i-vector approach. The margin-based strategy in angular softmax is beneficial to learn discriminative speaker embeddings where the random selection of margin values is a big issue in additive angular margin and multiplicative angular margin. As a better solution in this matter, we present an alternative approach by introducing a bit similar form of an additive parameter that was originally introduced for face recognition, and it has a capacity to adjust automatically with the corresponding margin values and is applicable to learn more discriminative features than the Softmax. Experiments are conducted on the part of Fisher dataset, where it observed that the additive parameter with angular softmax to train the front-end and probabilistic linear discriminant analysis (PLDA) in the back-end boosts the performance of the structure. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=additive%20parameter" title="additive parameter">additive parameter</a>, <a href="https://publications.waset.org/abstracts/search?q=angular%20softmax" title=" angular softmax"> angular softmax</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20verification" title=" speaker verification"> speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=PLDA" title=" PLDA"> PLDA</a> </p> <a href="https://publications.waset.org/abstracts/152915/modified-form-of-margin-based-angular-softmax-loss-for-speaker-verification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19927</span> A Critical Discourse Analysis of President Muhammad Buhari's Speeches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joy%20Aworo-Okoroh">Joy Aworo-Okoroh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Politics is about trust and trust is challenged by the speaker’s ability to manipulate language before the electorate. Critical discourse analysis investigates the role of language in constructing social relationships between a political speaker and his audience. This paper explores the linguistic choices made by President Muhammad Buhari that enshrines his ideologies as well as the socio-political relations of power between him and Nigerians in his speeches. Two speeches of President Buhari –inaugural and Independence Day speeches are analyzed using Norman Fairclough’s perspective on Halliday’s Systemic functional grammar. The analysis is at two levels. The first level of analysis is the identification of transitivity and modality choices in the speeches and how they reveal the covert ideologies. The second analysis is premised on Normal Fairclough’s model, the clauses are analyzed to identify elements of power, hesistation, persuasion, threat and religious statement. It was discovered that Buhari is a dominant character who manipulates the material processes a lot. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=politics" title="politics">politics</a>, <a href="https://publications.waset.org/abstracts/search?q=critical%20discourse%20analysis" title=" critical discourse analysis"> critical discourse analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=Norman%20Fairclough" title=" Norman Fairclough"> Norman Fairclough</a>, <a href="https://publications.waset.org/abstracts/search?q=systemic%20functional%20grammar" title=" systemic functional grammar"> systemic functional grammar</a> </p> <a href="https://publications.waset.org/abstracts/45028/a-critical-discourse-analysis-of-president-muhammad-buharis-speeches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45028.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">551</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19926</span> Digital Recording System Identification Based on Audio File</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michel%20Kulhandjian">Michel Kulhandjian</a>, <a href="https://publications.waset.org/abstracts/search?q=Dimitris%20A.%20Pados"> Dimitris A. Pados</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this work is to develop a theoretical framework for reliable digital recording system identification from digital audio files alone, for forensic purposes. A digital recording system consists of a microphone and a digital sound processing card. We view the cascade as a system of unknown transfer function. We expect same manufacturer and model microphone-sound card combinations to have very similar/near identical transfer functions, bar any unique manufacturing defect. Input voice (or other) signals are modeled as non-stationary processes. The technical problem under consideration becomes blind deconvolution with non-stationary inputs as it manifests itself in the specific application of digital audio recording equipment classification. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification" title="blind system identification">blind system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20fingerprinting" title=" audio fingerprinting"> audio fingerprinting</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution" title=" blind deconvolution"> blind deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20dereverberation" title=" blind dereverberation"> blind dereverberation</a> </p> <a href="https://publications.waset.org/abstracts/75122/digital-recording-system-identification-based-on-audio-file" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75122.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">304</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19925</span> Application of Low-order Modeling Techniques and Neural-Network Based Models for System Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Venkatesh%20Pulletikurthi">Venkatesh Pulletikurthi</a>, <a href="https://publications.waset.org/abstracts/search?q=Karthik%20B.%20Ariyur"> Karthik B. Ariyur</a>, <a href="https://publications.waset.org/abstracts/search?q=Luciano%20Castillo"> Luciano Castillo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The system identification from the turbulence wakes will lead to the tactical advantage to prepare and also, to predict the trajectory of the opponents’ movements. A low-order modeling technique, POD, is used to predict the object based on the wake pattern and compared with pre-trained image recognition neural network (NN) to classify the wake patterns into objects. It is demonstrated that low-order modeling, POD, is able to predict the objects better compared to pretrained NN by ~30%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=the%20bluff%20body%20wakes" title="the bluff body wakes">the bluff body wakes</a>, <a href="https://publications.waset.org/abstracts/search?q=low-order%20modeling" title=" low-order modeling"> low-order modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20identification" title=" system identification"> system identification</a> </p> <a href="https://publications.waset.org/abstracts/146168/application-of-low-order-modeling-techniques-and-neural-network-based-models-for-system-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146168.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">180</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19924</span> Smart Unmanned Parking System Based on Radio Frequency Identification Technology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yu%20Qin">Yu Qin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to tackle the ever-growing problem of the lack of parking space, this paper presents the design and implementation of a smart unmanned parking system that is based on RFID (radio frequency identification) technology and Wireless communication technology. This system uses RFID technology to achieve the identification function (transmitted by 2.4 G wireless module) and is equipped with an STM32L053 micro controller as the main control chip of the smart vehicle. This chip can accomplish automatic parking (in/out), charging and other functions. On this basis, it can also help users easily query the information that is stored in the database through the Internet. Experimental tests have shown that the system has the features of low power consumption and stable operation, among others. It can effectively improve the level of automation control of the parking lot management system and has enormous application prospects. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RFID" title="RFID">RFID</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20system" title=" embedded system"> embedded system</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned" title=" unmanned"> unmanned</a>, <a href="https://publications.waset.org/abstracts/search?q=parking%20management" title=" parking management"> parking management</a> </p> <a href="https://publications.waset.org/abstracts/81174/smart-unmanned-parking-system-based-on-radio-frequency-identification-technology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81174.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19923</span> Face Tracking and Recognition Using Deep Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Degale%20Desta">Degale Desta</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most important factor in identifying a person is their face. Even identical twins have their own distinct faces. As a result, identification and face recognition are needed to tell one person from another. A face recognition system is a verification tool used to establish a person's identity using biometrics. Nowadays, face recognition is a common technique used in a variety of applications, including home security systems, criminal identification, and phone unlock systems. This system is more secure because it only requires a facial image instead of other dependencies like a key or card. Face detection and face identification are the two phases that typically make up a human recognition system.The idea behind designing and creating a face recognition system using deep learning with Azure ML Python's OpenCV is explained in this paper. Face recognition is a task that can be accomplished using deep learning, and given the accuracy of this method, it appears to be a suitable approach. To show how accurate the suggested face recognition system is, experimental results are given in 98.46% accuracy using Fast-RCNN Performance of algorithms under different training conditions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-RCNN" title=" fast-RCNN"> fast-RCNN</a> </p> <a href="https://publications.waset.org/abstracts/163134/face-tracking-and-recognition-using-deep-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163134.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19922</span> The Difference of Learning Outcomes in Reading Comprehension between Text and Film as The Media in Indonesian Language for Foreign Speaker in Intermediate Level</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siti%20Ayu%20Ningsih">Siti Ayu Ningsih</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to find the differences outcomes in learning reading comprehension with text and film as media on Indonesian Language for foreign speaker (BIPA) learning at intermediate level. By using quantitative and qualitative research methods, the respondent of this study is a single respondent from D'Royal Morocco Integrative Islamic School in grade nine from secondary level. Quantitative method used to calculate the learning outcomes that have been given the appropriate action cycle, whereas qualitative method used to translate the findings derived from quantitative methods to be described. The technique used in this study is the observation techniques and testing work. Based on the research, it is known that the use of the text media is more effective than the film for intermediate level of Indonesian Language for foreign speaker learner. This is because, when using film the learner does not have enough time to take note the difficult vocabulary and don't have enough time to look for the meaning of the vocabulary from the dictionary. While the use of media texts shows the better effectiveness because it does not require additional time to take note the difficult words. For the words that are difficult or strange, the learner can immediately find its meaning from the dictionary. The presence of the text is also very helpful for Indonesian Language for foreign speaker learner to find the answers according to the questions more easily. By matching the vocabulary of the question into the text references. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Indonesian%20language%20for%20foreign%20speaker" title="Indonesian language for foreign speaker">Indonesian language for foreign speaker</a>, <a href="https://publications.waset.org/abstracts/search?q=learning%20outcome" title=" learning outcome"> learning outcome</a>, <a href="https://publications.waset.org/abstracts/search?q=media" title=" media"> media</a>, <a href="https://publications.waset.org/abstracts/search?q=reading%20comprehension" title=" reading comprehension"> reading comprehension</a> </p> <a href="https://publications.waset.org/abstracts/82676/the-difference-of-learning-outcomes-in-reading-comprehension-between-text-and-film-as-the-media-in-indonesian-language-for-foreign-speaker-in-intermediate-level" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82676.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">197</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19921</span> An Automatic Speech Recognition of Conversational Telephone Speech in Malay Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Draman">M. Draman</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Z.%20Muhamad%20Yassin"> S. Z. Muhamad Yassin</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20S.%20Alias"> M. S. Alias</a>, <a href="https://publications.waset.org/abstracts/search?q=Z.%20Lambak"> Z. Lambak</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20I.%20Zulkifli"> M. I. Zulkifli</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20N.%20Padhi"> S. N. Padhi</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20N.%20Baharim"> K. N. Baharim</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20Maskuriy"> F. Maskuriy</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20I.%20A.%20Rahim"> A. I. A. Rahim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The performance of Malay automatic speech recognition (ASR) system for the call centre environment is presented. The system utilizes Kaldi toolkit as the platform to the entire library and algorithm used in performing the ASR task. The acoustic model implemented in this system uses a deep neural network (DNN) method to model the acoustic signal and the standard (n-gram) model for language modelling. With 80 hours of training data from the call centre recordings, the ASR system can achieve 72% of accuracy that corresponds to 28% of word error rate (WER). The testing was done using 20 hours of audio data. Despite the implementation of DNN, the system shows a low accuracy owing to the varieties of noises, accent and dialect that typically occurs in Malaysian call centre environment. This significant variation of speakers is reflected by the large standard deviation of the average word error rate (WERav) (i.e., ~ 10%). 
It is observed that the lowest WER (13.8%) was obtained from a recording of a native speaker with a standard Malay dialect (central Malaysia), compared with the highest WER (49%) from a recording containing conversation of a speaker using a non-standard Malay dialect. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=conversational%20speech%20recognition" title="conversational speech recognition">conversational speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20network" title=" deep neural network"> deep neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=Malay%20language" title=" Malay language"> Malay language</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a> </p> <a href="https://publications.waset.org/abstracts/93292/an-automatic-speech-recognition-of-conversational-telephone-speech-in-malay-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93292.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">322</span> </span> </div> </div>
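<p>The accuracy figures in the abstract above are 100 minus the word error rate; for reference, a minimal Python implementation of the standard Levenshtein-based WER follows (the example sentences are invented).</p>
<pre><code>
import numpy as np

def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / reference length,
    computed with the standard Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
    d[:, 0] = np.arange(len(ref) + 1)   # all deletions
    d[0, :] = np.arange(len(hyp) + 1)   # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i-1, j-1] + (ref[i-1] != hyp[j-1])
            d[i, j] = min(sub, d[i-1, j] + 1, d[i, j-1] + 1)
    return d[-1, -1] / max(len(ref), 1)

# e.g. wer("saya nak tanya", "saya tanya") -> 1/3, i.e. about 33% WER
</code></pre>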
class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=closed-set%20tex-independent%20speaker%20identification%20system%20%28CISI%29&page=665">665</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=closed-set%20tex-independent%20speaker%20identification%20system%20%28CISI%29&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" 
class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>