CINXE.COM

Search results for: speaker identification

<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: speaker identification</title> <meta name="description" content="Search results for: speaker identification"> <meta name="keywords" content="speaker identification"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="speaker identification" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="speaker identification"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3086</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: speaker identification</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3086</span> USE-Net: SE-Block Enhanced U-Net Architecture for Robust Speaker Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kilari%20Nikhil">Kilari Nikhil</a>, <a href="https://publications.waset.org/abstracts/search?q=Ankur%20Tibrewal"> Ankur Tibrewal</a>, <a href="https://publications.waset.org/abstracts/search?q=Srinivas%20Kruthiventi%20S.%20S."> Srinivas Kruthiventi S. S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Conventional speaker identification systems often fall short of capturing the diverse variations present in speech data due to fixed-scale architectures. In this research, we propose a CNN-based architecture, USENet, designed to overcome these limitations. Leveraging two key techniques, our approach achieves superior performance on the VoxCeleb 1 Dataset without any pre-training. Firstly, we adopt a U-net-inspired design to extract features at multiple scales, empowering our model to capture speech characteristics effectively. Secondly, we introduce the squeeze and excitation block to enhance spatial feature learning. The proposed architecture showcases significant advancements in speaker identification, outperforming existing methods, and holds promise for future research in this domain. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20feature%20extraction" title="multi-scale feature extraction">multi-scale feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=squeeze%20and%20excitation" title=" squeeze and excitation"> squeeze and excitation</a>, <a href="https://publications.waset.org/abstracts/search?q=VoxCeleb1%20speaker%20identification" title=" VoxCeleb1 speaker identification"> VoxCeleb1 speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=mel-spectrograms" title=" mel-spectrograms"> mel-spectrograms</a>, <a href="https://publications.waset.org/abstracts/search?q=USENet" title=" USENet"> USENet</a> </p> <a href="https://publications.waset.org/abstracts/170441/use-net-se-block-enhanced-u-net-architecture-for-robust-speaker-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170441.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3085</span> Speaker Recognition Using LIRA Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nestor%20A.%20Garcia%20Fragoso">Nestor A. Garcia Fragoso</a>, <a href="https://publications.waset.org/abstracts/search?q=Tetyana%20Baydyk"> Tetyana Baydyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernst%20Kussul"> Ernst Kussul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article contains information from our investigation in the field of voice recognition. For this purpose, we created a voice database that contains different phrases in two languages, English and Spanish, for men and women. As a classifier, the LIRA (Limited Receptive Area) grayscale neural classifier was selected. The LIRA grayscale neural classifier was developed for image recognition tasks and demonstrated good results. Therefore, we decided to develop a recognition system using this classifier for voice recognition. From a specific set of speakers, we can recognize the speaker&rsquo;s voice. For this purpose, the system uses spectrograms of the voice signals as input to the system, extracts the characteristics and identifies the speaker. The results are described and analyzed in this article. The classifier can be used for speaker identification in security system or smart buildings for different types of intelligent devices. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=extreme%20learning" title="extreme learning">extreme learning</a>, <a href="https://publications.waset.org/abstracts/search?q=LIRA%20neural%20classifier" title=" LIRA neural classifier"> LIRA neural classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title=" speaker identification"> speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20recognition" title=" voice recognition"> voice recognition</a> </p> <a href="https://publications.waset.org/abstracts/112384/speaker-recognition-using-lira-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3084</span> Acoustic Analysis for Comparison and Identification of Normal and Disguised Speech of Individuals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Surbhi%20Mathur">Surbhi Mathur</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20M.%20Vyas"> J. M. Vyas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Although the rapid development of forensic speaker recognition technology has been conducted, there are still many problems to be solved. The biggest problem arises when the cases involving disguised voice samples come across for the purpose of examination and identification. Such type of voice samples of anonymous callers is frequently encountered in crimes involving kidnapping, blackmailing, hoax extortion and many more, where the speaker makes a deliberate effort to manipulate their natural voice in order to conceal their identity due to the fear of being caught. Voice disguise causes serious damage to the natural vocal parameters of the speakers and thus complicates the process of identification. The sole objective of this doctoral project is to find out the possibility of rendering definite opinions in cases involving disguised speech by experimentally determining the effects of different disguise forms on personal identification and percentage rate of speaker recognition for various voice disguise techniques such as raised pitch, lower pitch, increased nasality, covering the mouth, constricting tract, obstacle in mouth etc by analyzing and comparing the amount of phonetic and acoustic variation in of artificial (disguised) and natural sample of an individual, by auditory as well as spectrographic analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=forensic" title="forensic">forensic</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=voice" title=" voice"> voice</a>, <a href="https://publications.waset.org/abstracts/search?q=speech" title=" speech"> speech</a>, <a href="https://publications.waset.org/abstracts/search?q=disguise" title=" disguise"> disguise</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a> </p> <a href="https://publications.waset.org/abstracts/47439/acoustic-analysis-for-comparison-and-identification-of-normal-and-disguised-speech-of-individuals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47439.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3083</span> An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ben%20Soltane%20Cheima">Ben Soltane Cheima</a>, <a href="https://publications.waset.org/abstracts/search?q=Ittansa%20Yonas%20Kelbesa"> Ittansa Yonas Kelbesa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speaker Identification (SI) is the task of establishing identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still need of improvement. In this paper, a Closed-Set Tex-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficient (MFCC) as feature extraction and suitable combination of vector quantization (VQ) and Gaussian Mixture Model (GMM) together with Expectation Maximization algorithm (EM) for speaker modeling. The use of Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and Statistical Modeling of Background Noise in the pre-processing step of the feature extraction yields a better and more robust automatic speaker identification system. Also investigation of Linde-Buzo-Gray (LBG) clustering algorithm for initialization of GMM, for estimating the underlying parameters, in the EM step improved the convergence rate and systems performance. It also uses relative index as confidence measures in case of contradiction in identification process by GMM and VQ as well. Simulation results carried out on voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20modeling" title=" speaker modeling"> speaker modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20matching" title=" feature matching"> feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=Mel%20frequency%20cepstrum%20coefficient%20%28MFCC%29" title=" Mel frequency cepstrum coefficient (MFCC)"> Mel frequency cepstrum coefficient (MFCC)</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model%20%28GMM%29" title=" Gaussian mixture model (GMM)"> Gaussian mixture model (GMM)</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20quantization%20%28VQ%29" title=" vector quantization (VQ)"> vector quantization (VQ)</a>, <a href="https://publications.waset.org/abstracts/search?q=Linde-Buzo-Gray%20%28LBG%29" title=" Linde-Buzo-Gray (LBG)"> Linde-Buzo-Gray (LBG)</a>, <a href="https://publications.waset.org/abstracts/search?q=expectation%20maximization%20%28EM%29" title=" expectation maximization (EM)"> expectation maximization (EM)</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-processing" title=" pre-processing"> pre-processing</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20activity%20detection%20%28VAD%29" title=" voice activity detection (VAD)"> voice activity detection (VAD)</a>, <a href="https://publications.waset.org/abstracts/search?q=short%20time%20energy%20%28STE%29" title=" short time energy (STE)"> short time energy (STE)</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20noise%20statistical%20modeling" title=" background noise statistical modeling"> background noise statistical modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=closed-set%20tex-independent%20speaker%20identification%20system%20%28CISI%29" title=" closed-set tex-independent speaker identification system (CISI)"> closed-set tex-independent speaker identification system (CISI)</a> </p> <a href="https://publications.waset.org/abstracts/16253/an-intelligent-text-independent-speaker-identification-using-vq-gmm-model-based-multiple-classifier-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16253.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">309</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3082</span> A Two-Step Framework for Unsupervised Speaker Segmentation Using BIC and Artificial Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Alwosheel">Ahmad Alwosheel</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Alqaraawi"> Ahmed Alqaraawi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work proposes a new speaker segmentation approach for two speakers. It is an online approach that does not require a prior information about speaker models. It has two phases, a conventional approach such as unsupervised BIC-based is utilized in the first phase to detect speaker changes and train a Neural Network, while in the second phase, the output trained parameters from the Neural Network are used to predict next incoming audio stream. Using this approach, a comparable accuracy to similar BIC-based approaches is achieved with a significant improvement in terms of computation time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title="artificial neural network">artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=diarization" title=" diarization"> diarization</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20indexing" title=" speaker indexing"> speaker indexing</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20segmentation" title=" speaker segmentation"> speaker segmentation</a> </p> <a href="https://publications.waset.org/abstracts/27191/a-two-step-framework-for-unsupervised-speaker-segmentation-using-bic-and-artificial-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27191.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">502</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3081</span> Effect of Clinical Depression on Automatic Speaker Verification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sheeraz%20Memon">Sheeraz Memon</a>, <a href="https://publications.waset.org/abstracts/search?q=Namunu%20C.%20Maddage"> Namunu C. Maddage</a>, <a href="https://publications.waset.org/abstracts/search?q=Margaret%20Lech"> Margaret Lech</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicholas%20Allen"> Nicholas Allen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The effect of a clinical environment on the accuracy of the speaker verification was tested. The speaker verification tests were performed within homogeneous environments containing clinically depressed speakers only, and non-depresses speakers only, as well as within mixed environments containing different mixtures of both climatically depressed and non-depressed speakers. The speaker verification framework included the MFCCs features and the GMM modeling and classification method. The speaker verification experiments within homogeneous environments showed 5.1% increase of the EER within the clinically depressed environment when compared to the non-depressed environment. It indicated that the clinical depression increases the intra-speaker variability and makes the speaker verification task more challenging. Experiments with mixed environments indicated that the increase of the percentage of the depressed individuals within a mixed environment increases the speaker verification equal error rates. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20verification" title="speaker verification">speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=GMM" title=" GMM"> GMM</a>, <a href="https://publications.waset.org/abstracts/search?q=EM" title=" EM"> EM</a>, <a href="https://publications.waset.org/abstracts/search?q=clinical%20environment" title=" clinical environment"> clinical environment</a>, <a href="https://publications.waset.org/abstracts/search?q=clinical%20depression" title=" clinical depression"> clinical depression</a> </p> <a href="https://publications.waset.org/abstracts/39436/effect-of-clinical-depression-on-automatic-speaker-verification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39436.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3080</span> Comparative Methods for Speech Enhancement and the Effects on Text-Independent Speaker Identification Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Ajgou">R. Ajgou</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Sbaa"> S. Sbaa</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ghendir"> S. Ghendir</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Chemsa"> A. Chemsa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Taleb-Ahmed"> A. Taleb-Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The speech enhancement algorithm is to improve speech quality. In this paper, we review some speech enhancement methods and we evaluated their performance based on Perceptual Evaluation of Speech Quality scores (PESQ, ITU-T P.862). All method was evaluated in presence of different kind of noise using TIMIT database and NOIZEUS noisy speech corpus.. The noise was taken from the AURORA database and includes suburban train noise, babble, car, exhibition hall, restaurant, street, airport and train station noise. Simulation results showed improved performance of speech enhancement for Tracking of non-stationary noise approach in comparison with various methods in terms of PESQ measure. Moreover, we have evaluated the effects of the speech enhancement technique on Speaker Identification system based on autoregressive (AR) model and Mel-frequency Cepstral coefficients (MFCC). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20enhancement" title="speech enhancement">speech enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=pesq" title=" pesq"> pesq</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a> </p> <a href="https://publications.waset.org/abstracts/31102/comparative-methods-for-speech-enhancement-and-the-effects-on-text-independent-speaker-identification-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31102.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">424</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3079</span> Developed Text-Independent Speaker Verification System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Arif">Mohammed Arif</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdessalam%20Kifouche"> Abdessalam Kifouche</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech is a very convenient way of communication between people and machines. It conveys information about the identity of the talker. Since speaker recognition technology is increasingly securing our everyday lives, the objective of this paper is to develop two automatic text-independent speaker verification systems (TI SV) using low-level spectral features and machine learning methods. (i) The first system is based on a support vector machine (SVM), which was widely used in voice signal processing with the aim of speaker recognition involving verifying the identity of the speaker based on its voice characteristics, and (ii) the second is based on Gaussian Mixture Model (GMM) and Universal Background Model (UBM) to combine different functions from different resources to implement the SVM based. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20verification" title="speaker verification">speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=text-independent" title=" text-independent"> text-independent</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=cepstral%20analysis" title=" cepstral analysis"> cepstral analysis</a> </p> <a href="https://publications.waset.org/abstracts/183493/developed-text-independent-speaker-verification-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">58</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3078</span> A Cross-Dialect Statistical Analysis of Final Declarative Intonation in Tuvinian</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20Beziakina">D. Beziakina</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Bulgakova"> E. Bulgakova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study continues the research on Tuvinian intonation and presents a general cross-dialect analysis of intonation of Tuvinian declarative utterances, specifically the character of the tone movement in order to test the hypothesis about the prevalence of level tone in some Tuvinian dialects. The results of the analysis of basic pitch characteristics of Tuvinian speech (in general and in comparison with two other Turkic languages - Uzbek and Azerbaijani) are also given in this paper. The goal of our work was to obtain the ranges of pitch parameter values typical for Tuvinian speech. Such language-specific values can be used in speaker identification systems in order to get more accurate results of ethnic speech analysis. We also present the results of a cross-dialect analysis of declarative intonation in the poorly studied Tuvinian language. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20analysis" title="speech analysis">speech analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20analysis" title=" statistical analysis"> statistical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification%20of%20person" title=" identification of person"> identification of person</a> </p> <a href="https://publications.waset.org/abstracts/12497/a-cross-dialect-statistical-analysis-of-final-declarative-intonation-in-tuvinian" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12497.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3077</span> The Effect of The Speaker&#039;s Speaking Style as A Factor of Understanding and Comfort of The Listener</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Made%20Rahayu%20Putri%20Saron">Made Rahayu Putri Saron</a>, <a href="https://publications.waset.org/abstracts/search?q=Mochamad%20Nizar%20Palefi%20Ma%E2%80%99ady"> Mochamad Nizar Palefi Ma’ady</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Communication skills are important in everyday life, communication can be done verbally in the form of oral or written and nonverbal in the form of expressions or body movements. Good communication should be able to provide information clearly, and there is feedback from the speaker and listener. However, it is often found that the information conveyed is not clear, and there is no feedback from the listeners, so it cannot be ensured that the communication is effective and understandable. The speaker's understanding of the topic is one of the supporting factors for the listener to be able to accept the meaning of the conversation. However, based on the results of the literature review, it found that the influence factors of person speaking style are as follows: (i) environmental conditions; (ii) voice, articulation, and accent; (iii) gender; (iv) personality; (v) speech disorders (Dysarthria); when speaking also have an important influence on speaker’s speaking style. It can be concluded the factors that support understanding and comfort of the listener are dependent on the nature of the speaker (environmental conditions, voice, gender, personality) or also it the speaker have speech disorders. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=listener" title="listener">listener</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20speaking" title=" public speaking"> public speaking</a>, <a href="https://publications.waset.org/abstracts/search?q=speaking%20style" title=" speaking style"> speaking style</a>, <a href="https://publications.waset.org/abstracts/search?q=understanding" title=" understanding"> understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=and%20comfortable%20factor" title=" and comfortable factor"> and comfortable factor</a> </p> <a href="https://publications.waset.org/abstracts/145442/the-effect-of-the-speakers-speaking-style-as-a-factor-of-understanding-and-comfort-of-the-listener" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145442.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3076</span> Multi-Modal Feature Fusion Network for Speaker Recognition Task</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiang%20Shijie">Xiang Shijie</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhou%20Dong"> Zhou Dong</a>, <a href="https://publications.waset.org/abstracts/search?q=Tian%20Dan"> Tian Dan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speaker recognition is a crucial task in the field of speech processing, aimed at identifying individuals based on their vocal characteristics. However, existing speaker recognition methods face numerous challenges. Traditional methods primarily rely on audio signals, which often suffer from limitations in noisy environments, variations in speaking style, and insufficient sample sizes. Additionally, relying solely on audio features can sometimes fail to capture the unique identity of the speaker comprehensively, impacting recognition accuracy. To address these issues, we propose a multi-modal network architecture that simultaneously processes both audio and text signals. By gradually integrating audio and text features, we leverage the strengths of both modalities to enhance the robustness and accuracy of speaker recognition. Our experiments demonstrate significant improvements with this multi-modal approach, particularly in complex environments, where recognition performance has been notably enhanced. Our research not only highlights the limitations of current speaker recognition methods but also showcases the effectiveness of multi-modal fusion techniques in overcoming these limitations, providing valuable insights for future research. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title="feature fusion">feature fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=memory%20network" title=" memory network"> memory network</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20input" title=" multimodal input"> multimodal input</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a> </p> <a href="https://publications.waset.org/abstracts/191527/multi-modal-feature-fusion-network-for-speaker-recognition-task" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/191527.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">33</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3075</span> Experimental Study on the Heat Transfer Characteristics of the 200W Class Woofer Speaker</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyung-Jin%20Kim">Hyung-Jin Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Dae-Wan%20Kim"> Dae-Wan Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Moo-Yeon%20Lee"> Moo-Yeon Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this study is to experimentally investigate the heat transfer characteristics of 200 W class woofer speaker units with the input voice signals. The temperature and heat transfer characteristics of the 200 W class woofer speaker unit were experimentally tested with the several input voice signals such as 1500 Hz, 2500 Hz, and 5000 Hz respectively. From the experiments, it can be observed that the temperature of the woofer speaker unit including the voice-coil part increases with a decrease in input voice signals. Also, the temperature difference in measured points of the voice coil is increased with decrease of the input voice signals. In addition, the heat transfer characteristics of the woofer speaker in case of the input voice signal of 1500 Hz is 40% higher than that of the woofer speaker in case of the input voice signal of 5000 Hz at the measuring time of 200 seconds. It can be concluded from the experiments that initially the temperature of the voice signal increases rapidly with time, after a certain period of time it increases exponentially. Also during this time dependent temperature change, it can be observed that high voice signal is stable than low voice signal. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heat%20transfer" title="heat transfer">heat transfer</a>, <a href="https://publications.waset.org/abstracts/search?q=temperature" title=" temperature"> temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20coil" title=" voice coil"> voice coil</a>, <a href="https://publications.waset.org/abstracts/search?q=woofer%20speaker" title=" woofer speaker"> woofer speaker</a> </p> <a href="https://publications.waset.org/abstracts/5142/experimental-study-on-the-heat-transfer-characteristics-of-the-200w-class-woofer-speaker" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">360</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3074</span> Performance Evaluation of Acoustic-Spectrographic Voice Identification Method in Native and Non-Native Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=E.%20Krasnova">E. Krasnova</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Bulgakova"> E. Bulgakova</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Shchemelinin"> V. Shchemelinin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with acoustic-spectrographic voice identification method in terms of its performance in non-native language speech. Performance evaluation is conducted by comparing the result of the analysis of recordings containing native language speech with recordings that contain foreign language speech. Our research is based on Tajik and Russian speech of Tajik native speakers due to the character of the criminal situation with drug trafficking. We propose a pilot experiment that represents a primary attempt enter the field. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title="speaker identification">speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic-spectrographic%20method" title=" acoustic-spectrographic method"> acoustic-spectrographic method</a>, <a href="https://publications.waset.org/abstracts/search?q=non-native%20speech" title=" non-native speech"> non-native speech</a>, <a href="https://publications.waset.org/abstracts/search?q=performance%20evaluation" title=" performance evaluation"> performance evaluation</a> </p> <a href="https://publications.waset.org/abstracts/12496/performance-evaluation-of-acoustic-spectrographic-voice-identification-method-in-native-and-non-native-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12496.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3073</span> Modified Form of Margin Based Angular Softmax Loss for Speaker Verification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jamshaid%20ul%20Rahman">Jamshaid ul Rahman</a>, <a href="https://publications.waset.org/abstracts/search?q=Akhter%20Ali"> Akhter Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Adnan%20Manzoor"> Adnan Manzoor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Learning-based systems have received increasing interest in recent years; recognition structures, including end-to-end speak recognition, are one of the hot topics in this area. A famous work on end-to-end speaker verification by using Angular Softmax Loss gained significant importance and is considered useful to directly trains a discriminative model instead of the traditional adopted i-vector approach. The margin-based strategy in angular softmax is beneficial to learn discriminative speaker embeddings where the random selection of margin values is a big issue in additive angular margin and multiplicative angular margin. As a better solution in this matter, we present an alternative approach by introducing a bit similar form of an additive parameter that was originally introduced for face recognition, and it has a capacity to adjust automatically with the corresponding margin values and is applicable to learn more discriminative features than the Softmax. Experiments are conducted on the part of Fisher dataset, where it observed that the additive parameter with angular softmax to train the front-end and probabilistic linear discriminant analysis (PLDA) in the back-end boosts the performance of the structure. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=additive%20parameter" title="additive parameter">additive parameter</a>, <a href="https://publications.waset.org/abstracts/search?q=angular%20softmax" title=" angular softmax"> angular softmax</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20verification" title=" speaker verification"> speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=PLDA" title=" PLDA"> PLDA</a> </p> <a href="https://publications.waset.org/abstracts/152915/modified-form-of-margin-based-angular-softmax-loss-for-speaker-verification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3072</span> A Critical Discourse Analysis of President Muhammad Buhari&#039;s Speeches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joy%20Aworo-Okoroh">Joy Aworo-Okoroh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Politics is about trust and trust is challenged by the speaker’s ability to manipulate language before the electorate. Critical discourse analysis investigates the role of language in constructing social relationships between a political speaker and his audience. This paper explores the linguistic choices made by President Muhammad Buhari that enshrines his ideologies as well as the socio-political relations of power between him and Nigerians in his speeches. Two speeches of President Buhari –inaugural and Independence Day speeches are analyzed using Norman Fairclough’s perspective on Halliday’s Systemic functional grammar. The analysis is at two levels. The first level of analysis is the identification of transitivity and modality choices in the speeches and how they reveal the covert ideologies. The second analysis is premised on Normal Fairclough’s model, the clauses are analyzed to identify elements of power, hesistation, persuasion, threat and religious statement. It was discovered that Buhari is a dominant character who manipulates the material processes a lot. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=politics" title="politics">politics</a>, <a href="https://publications.waset.org/abstracts/search?q=critical%20discourse%20analysis" title=" critical discourse analysis"> critical discourse analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=Norman%20Fairclough" title=" Norman Fairclough"> Norman Fairclough</a>, <a href="https://publications.waset.org/abstracts/search?q=systemic%20functional%20grammar" title=" systemic functional grammar"> systemic functional grammar</a> </p> <a href="https://publications.waset.org/abstracts/45028/a-critical-discourse-analysis-of-president-muhammad-buharis-speeches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45028.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">551</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3071</span> The Difference of Learning Outcomes in Reading Comprehension between Text and Film as The Media in Indonesian Language for Foreign Speaker in Intermediate Level</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siti%20Ayu%20Ningsih">Siti Ayu Ningsih</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to find the differences outcomes in learning reading comprehension with text and film as media on Indonesian Language for foreign speaker (BIPA) learning at intermediate level. By using quantitative and qualitative research methods, the respondent of this study is a single respondent from D'Royal Morocco Integrative Islamic School in grade nine from secondary level. Quantitative method used to calculate the learning outcomes that have been given the appropriate action cycle, whereas qualitative method used to translate the findings derived from quantitative methods to be described. The technique used in this study is the observation techniques and testing work. Based on the research, it is known that the use of the text media is more effective than the film for intermediate level of Indonesian Language for foreign speaker learner. This is because, when using film the learner does not have enough time to take note the difficult vocabulary and don't have enough time to look for the meaning of the vocabulary from the dictionary. While the use of media texts shows the better effectiveness because it does not require additional time to take note the difficult words. For the words that are difficult or strange, the learner can immediately find its meaning from the dictionary. The presence of the text is also very helpful for Indonesian Language for foreign speaker learner to find the answers according to the questions more easily. By matching the vocabulary of the question into the text references. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Indonesian%20language%20for%20foreign%20speaker" title="Indonesian language for foreign speaker">Indonesian language for foreign speaker</a>, <a href="https://publications.waset.org/abstracts/search?q=learning%20outcome" title=" learning outcome"> learning outcome</a>, <a href="https://publications.waset.org/abstracts/search?q=media" title=" media"> media</a>, <a href="https://publications.waset.org/abstracts/search?q=reading%20comprehension" title=" reading comprehension"> reading comprehension</a> </p> <a href="https://publications.waset.org/abstracts/82676/the-difference-of-learning-outcomes-in-reading-comprehension-between-text-and-film-as-the-media-in-indonesian-language-for-foreign-speaker-in-intermediate-level" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82676.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">197</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3070</span> Studying Second Language Learners&#039; Language Behavior from Conversation Analysis Perspective</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yanyan%20Wang">Yanyan Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper on second language teaching and learning uses conversation analysis (CA) approach and focuses on how second language learners of Chinese do repair when making clarification requests. In order to demonstrate their behavior in interaction, a comparison was made to study the differences between native speakers of Chinese with non-native speakers of Chinese. The significance of the research is to make second language teachers and learners aware of repair and how to seek clarification. Utilizing the methodology of CA, the research involved two sets of naturally occurring recordings, one of native speaker students and the other of non-native speaker students. Both sets of recording were telephone talks between students and teachers. There were 50 native speaker students and 50 non-native speaker students. From multiple listening to the recordings, the parts with repairs for clarification were selected for analysis which included the moments in the talk when students had problems in understanding or hearing the speaker and had to seek clarification. For example, ‘Sorry, I do not understand ‘and ‘Can you repeat the question? ‘were the parts as repair to make clarification requests. In the data, there were 43 such cases from native speaker students and 88 cases from non-native speaker students. The non-native speaker students were more likely to use repair to seek clarification. Analysis on how the students make clarification requests during their conversation was carried out by investigating how the students initiated problems and how the teachers repaired the problems. In CA term, it is called other-initiated self-repair (OISR), which refers to student-initiated teacher-repair in this research. The findings show that, in initiating repair, native speaker students pay more attention to mutual understanding (inter-subjectivity) while non-native speaker students, due to their lack of language proficiency, pay more attention to their status of knowledge (epistemic) switch. There are three major differences: 1, native Chinese students more often initiate closed-class OISR (seeking specific information in the request) such as repeating a word or phrases from the previous turn while non-native students more frequently initiate open-class OISR (not specifying clarification) such as ‘sorry, I don’t understand ‘. 2, native speakers’ clarification requests are treated by the teacher as understanding of the content while non-native learners’ clarification requests are treated by teacher as language proficiency problem. 3, native speakers don’t see repair as knowledge issue and there is no third position in the repair sequences to close repair while non-native learners take repair sequence as a time to adjust their knowledge. There is clear closing third position token such as ‘oh ‘ to close repair sequence so that the topic can go back. In conclusion, this paper uses conversation analysis approach to compare differences between native Chinese speakers and non-native Chinese learners in their ways of conducting repair when making clarification requests. The findings are useful in future Chinese language teaching and learning, especially in teaching pragmatics such as requests. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=conversation%20analysis%20%28CA%29" title="conversation analysis (CA)">conversation analysis (CA)</a>, <a href="https://publications.waset.org/abstracts/search?q=clarification%20request" title=" clarification request"> clarification request</a>, <a href="https://publications.waset.org/abstracts/search?q=second%20language%20%28L2%29" title=" second language (L2)"> second language (L2)</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching%20implication" title=" teaching implication"> teaching implication</a> </p> <a href="https://publications.waset.org/abstracts/74368/studying-second-language-learners-language-behavior-from-conversation-analysis-perspective" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74368.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3069</span> The Effect of Iconic and Beat Gestures on Memory Recall in Greek’s First and Second Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eleni%20Ioanna%20Levantinou">Eleni Ioanna Levantinou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gestures play a major role in comprehension and memory recall due to the fact that aid the efficient channel of the meaning and support listeners&rsquo; comprehension and memory. In the present study, the assistance of two kinds of gestures (iconic and beat gestures) is tested in regards to memory and recall. The hypothesis investigated here is whether or not iconic and beat gestures provide assistance in memory and recall in Greek and in Greek speakers&rsquo; second language. Two groups of participants were formed, one comprising Greeks that reside in Athens and one with Greeks that reside in Copenhagen. Three kinds of stimuli were used: A video with words accompanied with iconic gestures, a video with words accompanied with beat gestures and a video with words alone. The languages used are Greek and English. The words in the English videos were spoken by a native English speaker and by a Greek speaker talking English. The reason for this is that when it comes to beat gestures that serve a meta-cognitive function and are generated according to the intonation of a language, prosody plays a major role. Thus, participants that have different influences in prosody may generate different results from rhythmic gestures. Memory recall was assessed by asking the participants to try to remember as many words as they could after viewing each video. Results show that iconic gestures provide significant assistance in memory and recall in Greek and in English whether they are produced by a native or a second language speaker. In the case of beat gestures though, the findings indicate that beat gestures may not play such a significant role in Greek language. As far as intonation is concerned, a significant difference was not found in the case of beat gestures produced by a native English speaker and by a Greek speaker talking English. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=first%20language" title="first language">first language</a>, <a href="https://publications.waset.org/abstracts/search?q=gestures" title=" gestures"> gestures</a>, <a href="https://publications.waset.org/abstracts/search?q=memory" title=" memory"> memory</a>, <a href="https://publications.waset.org/abstracts/search?q=second%20language%20acquisition" title=" second language acquisition"> second language acquisition</a> </p> <a href="https://publications.waset.org/abstracts/49317/the-effect-of-iconic-and-beat-gestures-on-memory-recall-in-greeks-first-and-second-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49317.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3068</span> A Cross-Gender Statistical Analysis of Tuvinian Intonation Features in Comparison With Uzbek and Azerbaijani</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daria%20Beziakina">Daria Beziakina</a>, <a href="https://publications.waset.org/abstracts/search?q=Elena%20Bulgakova"> Elena Bulgakova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with cross-gender and cross-linguistic comparison of pitch characteristics for Tuvinian with two other Turkic languages - Uzbek and Azerbaijani, based on the results of statistical analysis of pitch parameter values and intonation patterns used by male and female speakers. The main goal of our work is to obtain the ranges of pitch parameter values typical for Tuvinian speakers for the purpose of automatic language identification. We also propose a cross-gender analysis of declarative intonation in the poorly studied Tuvinian language. The ranges of pitch parameter values were obtained by means of specially developed software that deals with the distribution of pitch values and allows us to obtain statistical language-specific pitch intervals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20analysis" title="speech analysis">speech analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20analysis" title=" statistical analysis"> statistical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification%20of%20person" title=" identification of person"> identification of person</a> </p> <a href="https://publications.waset.org/abstracts/8047/a-cross-gender-statistical-analysis-of-tuvinian-intonation-features-in-comparison-with-uzbek-and-azerbaijani" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8047.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">347</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3067</span> Multiple Identity Construction among Multilingual Minorities: A Quantitative Sociolinguistic Case Study </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stefanie%20Siebenh%C3%BCtter">Stefanie Siebenhütter</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to reveal criterions involved in the process of identity-forming among multilingual minority language speakers in Northeastern Thailand and in the capital Bangkok. Using sociolinguistic interviews and questionnaires, it is asked which factors are important for speakers and how they define their identity by their interactions socially as well as linguistically. One key question to answer is how sociolinguistic factors may force or diminish the process of forming social identity of multilingual minority speakers. However, the motivation for specific language use is rarely overt to the speaker’s themselves as well as to others. Therefore, identifying the intentions included in the process of identity construction is to approach by scrutinizing speaker’s behavior and attitudes. Combining methods used in sociolinguistics and social psychology allows uncovering the tools for identity construction that ethnic Kui uses to range themselves within a multilingual setting. By giving an overview of minority speaker’s language use in context of the specific border near multilingual situation and asking how speakers construe identity within this spatial context, the results exhibit some of the subtle and mostly unconscious criterions involved in the ongoing process of identity construction. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=social%20identity" title="social identity">social identity</a>, <a href="https://publications.waset.org/abstracts/search?q=identity%20construction" title=" identity construction"> identity construction</a>, <a href="https://publications.waset.org/abstracts/search?q=minority%20language" title=" minority language"> minority language</a>, <a href="https://publications.waset.org/abstracts/search?q=multilingualism" title=" multilingualism"> multilingualism</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20networks" title=" social networks"> social networks</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20boundaries" title=" social boundaries"> social boundaries</a> </p> <a href="https://publications.waset.org/abstracts/114208/multiple-identity-construction-among-multilingual-minorities-a-quantitative-sociolinguistic-case-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114208.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">267</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3066</span> Heritage Spanish Speaker’s Bilingual Practices and Linguistic Varieties: Challenges and Opportunities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ana%20C.%20Sanchez">Ana C. Sanchez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper will discuss some of the bilingual practices of Heritage Spanish speakers caused by living within two cultures and two languages, Spanish, the heritage language, and English, the dominant language. When two languages remain in contact for long periods, such as the case of Spanish and English, it is common that both languages can be affected by bilingual practices such as Spanglish, code-switching, borrowing, anglicisms and calques. Examples of these translingual practices will be provided, as well as HS speaker’s linguistic dialects, and the challenges they encounter with the standard variety used in the Spanish classroom. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heritage" title="heritage">heritage</a>, <a href="https://publications.waset.org/abstracts/search?q=practices" title=" practices"> practices</a>, <a href="https://publications.waset.org/abstracts/search?q=Spanish" title=" Spanish"> Spanish</a>, <a href="https://publications.waset.org/abstracts/search?q=speakers%20translingual" title=" speakers translingual"> speakers translingual</a> </p> <a href="https://publications.waset.org/abstracts/143699/heritage-spanish-speakers-bilingual-practices-and-linguistic-varieties-challenges-and-opportunities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143699.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">208</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3065</span> Forensic Speaker Verification in Noisy Environmental by Enhancing the Speech Signal Using ICA Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Kamil%20Hasan%20Al-Ali">Ahmed Kamil Hasan Al-Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Bouchra%20Senadji"> Bouchra Senadji</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganesh%20Naik"> Ganesh Naik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose a system to real environmental noise and channel mismatch for forensic speaker verification systems. This method is based on suppressing various types of real environmental noise by using independent component analysis (ICA) algorithm. The enhanced speech signal is applied to mel frequency cepstral coefficients (MFCC) or MFCC feature warping to extract the essential characteristics of the speech signal. Channel effects are reduced using an intermediate vector (i-vector) and probabilistic linear discriminant analysis (PLDA) approach for classification. The proposed algorithm is evaluated by using an Australian forensic voice comparison database, combined with car, street and home noises from QUT-NOISE at a signal to noise ratio (SNR) ranging from -10 dB to 10 dB. Experimental results indicate that the MFCC feature warping-ICA achieves a reduction in equal error rate about (48.22%, 44.66%, and 50.07%) over using MFCC feature warping when the test speech signals are corrupted with random sessions of street, car, and home noises at -10 dB SNR. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=noisy%20forensic%20speaker%20verification" title="noisy forensic speaker verification">noisy forensic speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=ICA%20algorithm" title=" ICA algorithm"> ICA algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC%20feature%20warping" title=" MFCC feature warping"> MFCC feature warping</a> </p> <a href="https://publications.waset.org/abstracts/66332/forensic-speaker-verification-in-noisy-environmental-by-enhancing-the-speech-signal-using-ica-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">408</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3064</span> Disability, Stigma and In-Group Identification: An Exploration across Different Disability Subgroups</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sharmila%20Rathee">Sharmila Rathee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Individuals with disability/ies often face negative attitudes, discrimination, exclusion, and inequality of treatment due to stigmatization and stigmatized treatment. While a significant number of studies in field of stigma suggest that group-identification has positive consequences for stigmatized individuals, ironically very miniscule empirical work in sight has attempted to investigate in-group identification as a coping measure against stigma, humiliation and related experiences among disability group. In view of death of empirical research on in-group identification among disability group, through present work, an attempt has been made to examine the experiences of stigma, humiliation, and in-group identification among disability group. Results of the study suggest that use of in-group identification as a coping strategy is not uniform across members of disability group and degree of in-group identification differs across different sub-groups of disability groups. Further, in-group identification among members of disability group depends on variables like degree and impact of disability, factors like onset of disability, nature, and visibility of disability, educational experiences and resources available to deal with disabling conditions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=disability" title="disability">disability</a>, <a href="https://publications.waset.org/abstracts/search?q=stigma" title=" stigma"> stigma</a>, <a href="https://publications.waset.org/abstracts/search?q=in-group%20identification" title=" in-group identification"> in-group identification</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20identity" title=" social identity"> social identity</a> </p> <a href="https://publications.waset.org/abstracts/48888/disability-stigma-and-in-group-identification-an-exploration-across-different-disability-subgroups" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48888.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">324</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3063</span> Forensic Challenges in Source Device Identification for Digital Videos</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mustapha%20Aminu%20Bagiwa">Mustapha Aminu Bagiwa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ainuddin%20Wahid%20Abdul%20Wahab"> Ainuddin Wahid Abdul Wahab</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Yamani%20Idna%20Idris"> Mohd Yamani Idna Idris</a>, <a href="https://publications.waset.org/abstracts/search?q=Suleman%20Khan"> Suleman Khan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video source device identification has become a problem of concern in numerous domains especially in multimedia security and digital investigation. This is because videos are now used as evidence in legal proceedings. Source device identification aim at identifying the source of digital devices using the content they produced. However, due to affordable processing tools and the influx in digital content generating devices, source device identification is still a major problem within the digital forensic community. In this paper, we discuss source device identification for digital videos by identifying techniques that were proposed in the literature for model or specific device identification. This is aimed at identifying salient open challenges for future research. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20forgery" title="video forgery">video forgery</a>, <a href="https://publications.waset.org/abstracts/search?q=source%20camcorder" title=" source camcorder"> source camcorder</a>, <a href="https://publications.waset.org/abstracts/search?q=device%20identification" title=" device identification"> device identification</a>, <a href="https://publications.waset.org/abstracts/search?q=forgery%20detection" title=" forgery detection "> forgery detection </a> </p> <a href="https://publications.waset.org/abstracts/21641/forensic-challenges-in-source-device-identification-for-digital-videos" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21641.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">631</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3062</span> Identification of Dynamic Friction Model for High-Precision Motion Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Martin%20Goubej">Martin Goubej</a>, <a href="https://publications.waset.org/abstracts/search?q=Tomas%20Popule"> Tomas Popule</a>, <a href="https://publications.waset.org/abstracts/search?q=Alois%20Krejci"> Alois Krejci</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper deals with experimental identification of mechanical systems with nonlinear friction characteristics. Dynamic LuGre friction model is adopted and a systematic approach to parameter identification of both linear and nonlinear subsystems is given. The identification procedure consists of three subsequent experiments which deal with the individual parts of plant dynamics. The proposed method is experimentally verified on an industrial-grade robotic manipulator. Model fidelity is compared with the results achieved with a static friction model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mechanical%20friction" title="mechanical friction">mechanical friction</a>, <a href="https://publications.waset.org/abstracts/search?q=LuGre%20model" title=" LuGre model"> LuGre model</a>, <a href="https://publications.waset.org/abstracts/search?q=friction%20identification" title=" friction identification"> friction identification</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20control" title=" motion control"> motion control</a> </p> <a href="https://publications.waset.org/abstracts/51897/identification-of-dynamic-friction-model-for-high-precision-motion-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51897.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3061</span> Identification of Nonlinear Systems Structured by Hammerstein-Wiener Model </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Brouri">A. Brouri</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20Giri"> F. Giri</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Mkhida"> A. Mkhida</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Elkarkri"> A. Elkarkri</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20L.%20Chhibat"> M. L. Chhibat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Standard Hammerstein-Wiener models consist of a linear subsystem sandwiched by two memoryless nonlinearities. Presently, the linear subsystem is allowed to be parametric or not, continuous- or discrete-time. The input and output nonlinearities are polynomial and may be noninvertible. A two-stage identification method is developed such the parameters of all nonlinear elements are estimated first using the Kozen-Landau polynomial decomposition algorithm. The obtained estimates are then based upon in the identification of the linear subsystem, making use of suitable pre-ad post-compensators. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20system%20identification" title="nonlinear system identification">nonlinear system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=Hammerstein-Wiener%20systems" title=" Hammerstein-Wiener systems"> Hammerstein-Wiener systems</a>, <a href="https://publications.waset.org/abstracts/search?q=frequency%20identification" title=" frequency identification"> frequency identification</a>, <a href="https://publications.waset.org/abstracts/search?q=polynomial%20decomposition" title=" polynomial decomposition"> polynomial decomposition</a> </p> <a href="https://publications.waset.org/abstracts/7969/identification-of-nonlinear-systems-structured-by-hammerstein-wiener-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7969.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">511</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3060</span> Structural Damage Detection Using Sensors Optimally Located</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Alberto%20Riveros">Carlos Alberto Riveros</a>, <a href="https://publications.waset.org/abstracts/search?q=Edwin%20Fabi%C3%A1n%20Garc%C3%ADa"> Edwin Fabián García</a>, <a href="https://publications.waset.org/abstracts/search?q=Javier%20Enrique%20Rivero"> Javier Enrique Rivero</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The measured data obtained from sensors in continuous monitoring of civil structures are mainly used for modal identification and damage detection. Therefore when modal identification analysis is carried out the quality in the identification of the modes will highly influence the damage detection results. It is also widely recognized that the usefulness of the measured data used for modal identification and damage detection is significantly influenced by the number and locations of sensors. The objective of this study is the numerical implementation of two widely known optimum sensor placement methods in beam-like structures <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optimum%20sensor%20placement" title="optimum sensor placement">optimum sensor placement</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20damage%20detection" title=" structural damage detection"> structural damage detection</a>, <a href="https://publications.waset.org/abstracts/search?q=modal%20identification" title=" modal identification"> modal identification</a>, <a href="https://publications.waset.org/abstracts/search?q=beam-like%20structures." title=" beam-like structures. "> beam-like structures. </a> </p> <a href="https://publications.waset.org/abstracts/15240/structural-damage-detection-using-sensors-optimally-located" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15240.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">431</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3059</span> American Slang: Perception and Connotations – Issues of Translation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lison%20Carlier">Lison Carlier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The English language that is taught in school or used in media nowadays is defined as 'standard English,' although unstandardized Englishes, or 'parallel' Englishes, are practiced throughout the world. The existence of these 'parallel' Englishes has challenged standardization by imposing its own specific vocabulary or grammar. These non-standard languages tend to be regarded as inferior and, therefore, pose a problem regarding their translation. In the USA, 'slanguage', or slang, is a good example of a 'parallel' language. It consists of a particular set of vocabulary, used mostly in speech, and rarely in writing. Qualified as vulgar, often reduced to an urban language spoken by young people from lower classes, slanguage – or the language that is often first spoken between youths – is still the most common language used in the English-speaking world. Moreover, it appears that the prime meaning of 'informal' (as in an informal language) – a language that is spoken with persons the speaker knows – has been put aside and replaced in the general mind by the idea of vulgarity and non-appropriateness, when in fact informality is a sign of intimacy, not of vulgarity. When it comes to translating American slang, the main problem a translator encounters is the image and the cultural background usually associated with this 'parallel' language. Indeed, one will have, unwillingly, a predisposition to categorize a speaker of a 'parallel' language as being part of a particular group of people. The way one sees a speaker using it is paramount, and needs to be transposed into the target language. This paper will conduct an analysis of American slang – its use, perception and the image it gives of its speakers – and its translation into French, using the novel Is Everyone Hanging Out Without Me? (and other concerns) by way of example. In her autobiography/personal essay book, comedy writer, actress and author Mindy Kaling speaks with a very familiar English, including slang, which participates in the construction of her own voice and style, and enables a deeper connection with her readers. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=translation" title="translation">translation</a>, <a href="https://publications.waset.org/abstracts/search?q=English" title=" English"> English</a>, <a href="https://publications.waset.org/abstracts/search?q=slang" title=" slang"> slang</a>, <a href="https://publications.waset.org/abstracts/search?q=French" title=" French"> French</a> </p> <a href="https://publications.waset.org/abstracts/60197/american-slang-perception-and-connotations-issues-of-translation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60197.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3058</span> Self-Tuning Robot Control Based on Subspace Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mathias%20Marquardt">Mathias Marquardt</a>, <a href="https://publications.waset.org/abstracts/search?q=Peter%20D%C3%BCnow"> Peter Dünow</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Ba%C3%9Fler"> Sandra Baßler</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper describes the use of subspace based identification methods for auto tuning of a state space control system. The plant is an unstable but self balancing transport robot. Because of the unstable character of the process it has to be identified from closed loop input-output data. Based on the identified model a state space controller combined with an observer is calculated. The subspace identification algorithm and the controller design procedure is combined to a auto tuning method. The capability of the approach was verified in a simulation experiments under different process conditions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=auto%20tuning" title="auto tuning">auto tuning</a>, <a href="https://publications.waset.org/abstracts/search?q=balanced%20robot" title=" balanced robot"> balanced robot</a>, <a href="https://publications.waset.org/abstracts/search?q=closed%20loop%20identification" title=" closed loop identification"> closed loop identification</a>, <a href="https://publications.waset.org/abstracts/search?q=subspace%20identification" title=" subspace identification"> subspace identification</a> </p> <a href="https://publications.waset.org/abstracts/49108/self-tuning-robot-control-based-on-subspace-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49108.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3057</span> Distributed Perceptually Important Point Identification for Time Series Data Mining</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tak-Chung%20Fu">Tak-Chung Fu</a>, <a href="https://publications.waset.org/abstracts/search?q=Ying-Kit%20Hung"> Ying-Kit Hung</a>, <a href="https://publications.waset.org/abstracts/search?q=Fu-Lai%20Chung"> Fu-Lai Chung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of time series data mining, the concept of the Perceptually Important Point (PIP) identification process is first introduced in 2001. This process originally works for financial time series pattern matching and it is then found suitable for time series dimensionality reduction and representation. Its strength is on preserving the overall shape of the time series by identifying the salient points in it. With the rise of Big Data, time series data contributes a major proportion, especially on the data which generates by sensors in the Internet of Things (IoT) environment. According to the nature of PIP identification and the successful cases, it is worth to further explore the opportunity to apply PIP in time series ‘Big Data’. However, the performance of PIP identification is always considered as the limitation when dealing with ‘Big’ time series data. In this paper, two distributed versions of PIP identification based on the Specialized Binary (SB) Tree are proposed. The proposed approaches solve the bottleneck when running the PIP identification process in a standalone computer. Improvement in term of speed is obtained by the distributed versions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=distributed%20computing" title="distributed computing">distributed computing</a>, <a href="https://publications.waset.org/abstracts/search?q=performance%20analysis" title=" performance analysis"> performance analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=Perceptually%20Important%20Point%20identification" title=" Perceptually Important Point identification"> Perceptually Important Point identification</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20series%20data%20mining" title=" time series data mining"> time series data mining</a> </p> <a href="https://publications.waset.org/abstracts/84358/distributed-perceptually-important-point-identification-for-time-series-data-mining" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84358.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">435</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=102">102</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=103">103</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20identification&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10