CINXE.COM
Search results for: Speech Production
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: Speech Production</title> <meta name="description" content="Search results for: Speech Production"> <meta name="keywords" content="Speech Production"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="Speech Production" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Speech Production"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2287</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Speech Production</h1> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2287</span> Transformation of Vocal Characteristics: A Review of Literature</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Dong-Yan%20Huang">Dong-Yan Huang</a>, <a href="https://publications.waset.org/search?q=Ee%20Ping%20Ong"> Ee Ping Ong</a>, <a href="https://publications.waset.org/search?q=Susanto%20Rahardja"> Susanto Rahardja</a>, <a href="https://publications.waset.org/search?q=Minghui%20Dong"> Minghui Dong</a>, <a href="https://publications.waset.org/search?q=Haizhou%20Li"> Haizhou Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The transformation of vocal characteristics aims at modifying voice such that the intelligibility of aphonic voice is increased or the voice characteristics of a speaker (source speaker) to be perceived as if another speaker (target speaker) had uttered it. In this paper, the current state-of-the-art voice characteristics transformation methodology is reviewed. Special emphasis is placed on voice transformation methodology and issues for improving the transformed speech quality in intelligibility and naturalness are discussed. In particular, it is suggested to use the modulation theory of speech as a base for research on high quality voice transformation. This approach allows one to separate linguistic, expressive, organic and perspective information of speech, based on an analysis of how they are fused when speech is produced. Therefore, this theory provides the fundamentals not only for manipulating non-linguistic, extra-/paralinguistic and intra-linguistic variables for voice transformation, but also for paving the way for easily transposing the existing voice transformation methods to emotion-related voice quality transformation and speaking style transformation. From the perspectives of human speech production and perception, the popular voice transformation techniques are described and classified them based on the underlying principles either from the speech production or perception mechanisms or from both. In addition, the advantages and limitations of voice transformation techniques and the experimental manipulation of vocal cues are discussed through examples from past and present research. Finally, a conclusion and road map are pointed out for more natural voice transformation algorithms in the future. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Voice%20transformation" title="Voice transformation">Voice transformation</a>, <a href="https://publications.waset.org/search?q=Voice%20Quality" title=" Voice Quality"> Voice Quality</a>, <a href="https://publications.waset.org/search?q=Emotion" title=" Emotion"> Emotion</a>, <a href="https://publications.waset.org/search?q=Individuality" title="Individuality">Individuality</a>, <a href="https://publications.waset.org/search?q=Speaking%20Style" title=" Speaking Style"> Speaking Style</a>, <a href="https://publications.waset.org/search?q=Speech%20Production" title=" Speech Production"> Speech Production</a>, <a href="https://publications.waset.org/search?q=Speech%20Perception." title=" Speech Perception."> Speech Perception.</a> </p> <a href="https://publications.waset.org/4782/transformation-of-vocal-characteristics-a-review-of-literature" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4782/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4782/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4782/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4782/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4782/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4782/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4782/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4782/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4782/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4782/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4782.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2043</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2286</span> Automatic Recognition of Emotionally Coloured Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Theologos%20Athanaselis">Theologos Athanaselis</a>, <a href="https://publications.waset.org/search?q=Stelios%20Bakamidis"> Stelios Bakamidis</a>, <a href="https://publications.waset.org/search?q=Ioannis%20Dologlou"> Ioannis Dologlou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Emotion in speech is an issue that has been attracting the interest of the speech community for many years, both in the context of speech synthesis as well as in automatic speech recognition (ASR). In spite of the remarkable recent progress in Large Vocabulary Recognition (LVR), it is still far behind the ultimate goal of recognising free conversational speech uttered by any speaker in any environment. Current experimental tests prove that using state of the art large vocabulary recognition systems the error rate increases substantially when applied to spontaneous/emotional speech. This paper shows that recognition rate for emotionally coloured speech can be improved by using a language model based on increased representation of emotional utterances. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Statistical%20language%20model" title="Statistical language model">Statistical language model</a>, <a href="https://publications.waset.org/search?q=N-grams" title=" N-grams"> N-grams</a>, <a href="https://publications.waset.org/search?q=emotionallycoloured%20speech" title=" emotionallycoloured speech"> emotionallycoloured speech</a> </p> <a href="https://publications.waset.org/1891/automatic-recognition-of-emotionally-coloured-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/1891/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/1891/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/1891/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/1891/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/1891/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/1891/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/1891/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/1891/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/1891/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/1891/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/1891.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1618</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2285</span> Effect of Visual Speech in Sign Speech Synthesis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Zdenek%20Krnoul">Zdenek Krnoul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This article investigates a contribution of synthesized visual speech. Synthesis of visual speech expressed by a computer consists in an animation in particular movements of lips. Visual speech is also necessary part of the non-manual component of a sign language. Appropriate methodology is proposed to determine the quality and the accuracy of synthesized visual speech. Proposed methodology is inspected on Czech speech. Hence, this article presents a procedure of recording of speech data in order to set a synthesis system as well as to evaluate synthesized speech. Furthermore, one option of the evaluation process is elaborated in the form of a perceptual test. This test procedure is verified on the measured data with two settings of the synthesis system. The results of the perceptual test are presented as a statistically significant increase of intelligibility evoked by real and synthesized visual speech. Now, the aim is to show one part of evaluation process which leads to more comprehensive evaluation of the sign speech synthesis system.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Perception%20test" title="Perception test">Perception test</a>, <a href="https://publications.waset.org/search?q=Sign%20speech%20synthesis" title=" Sign speech synthesis"> Sign speech synthesis</a>, <a href="https://publications.waset.org/search?q=Talking%20head" title=" Talking head"> Talking head</a>, <a href="https://publications.waset.org/search?q=Visual%20speech." title=" Visual speech."> Visual speech.</a> </p> <a href="https://publications.waset.org/4362/effect-of-visual-speech-in-sign-speech-synthesis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4362/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4362/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4362/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4362/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4362/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4362/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4362/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4362/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4362/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4362/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4362.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1477</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2284</span> The Main Principles of Text-to-Speech Synthesis System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=K.R.%20Aida%E2%80%93Zade">K.R. Aida–Zade</a>, <a href="https://publications.waset.org/search?q=C.%20Ardil"> C. Ardil</a>, <a href="https://publications.waset.org/search?q=A.M.%20Sharifova"> A.M. Sharifova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, the main principles of text-to-speech synthesis system are presented. Associated problems which arise when developing speech synthesis system are described. Used approaches and their application in the speech synthesis systems for Azerbaijani language are shown.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=synthesis%20of%20Azerbaijani%20language" title="synthesis of Azerbaijani language">synthesis of Azerbaijani language</a>, <a href="https://publications.waset.org/search?q=morphemes" title=" morphemes"> morphemes</a>, <a href="https://publications.waset.org/search?q=phonemes" title="phonemes">phonemes</a>, <a href="https://publications.waset.org/search?q=sounds" title=" sounds"> sounds</a>, <a href="https://publications.waset.org/search?q=sentence" title=" sentence"> sentence</a>, <a href="https://publications.waset.org/search?q=speech%20synthesizer" title=" speech synthesizer"> speech synthesizer</a>, <a href="https://publications.waset.org/search?q=intonation" title=" intonation"> intonation</a>, <a href="https://publications.waset.org/search?q=accent" title=" accent"> accent</a>, <a href="https://publications.waset.org/search?q=pronunciation." title="pronunciation.">pronunciation.</a> </p> <a href="https://publications.waset.org/8303/the-main-principles-of-text-to-speech-synthesis-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8303/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8303/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8303/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8303/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8303/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8303/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8303/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8303/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8303/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8303/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8303.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">5652</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2283</span> TeleMe Speech Booster: Web-Based Speech Therapy and Training Program for Children with Articulation Disorders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=C.%20Treerattanaphan">C. Treerattanaphan</a>, <a href="https://publications.waset.org/search?q=P.%20Boonpramuk"> P. Boonpramuk</a>, <a href="https://publications.waset.org/search?q=P.%20Singla"> P. Singla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Frequent, continuous speech training has proven to be a necessary part of a successful speech therapy process, but constraints of traveling time and employment dispensation become key obstacles especially for individuals living in remote areas or for dependent children who have working parents. In order to ameliorate speech difficulties with ample guidance from speech therapists, a website has been developed that supports speech therapy and training for people with articulation disorders in the standard Thai language. This web-based program has the ability to record speech training exercises for each speech trainee. The records will be stored in a database for the speech therapist to investigate, evaluate, compare and keep track of all trainees’ progress in detail. Speech trainees can request live discussions via video conference call when needed. Communication through this web-based program facilitates and reduces training time in comparison to walk-in training or appointments. This type of training also allows people with articulation disorders to practice speech lessons whenever or wherever is convenient for them, which can lead to a more regular training processes.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Web-Based%20Remote%20Training%20Program" title="Web-Based Remote Training Program">Web-Based Remote Training Program</a>, <a href="https://publications.waset.org/search?q=Thai%20Speech%0D%0ATherapy" title=" Thai Speech Therapy"> Thai Speech Therapy</a>, <a href="https://publications.waset.org/search?q=Articulation%20Disorders." title=" Articulation Disorders."> Articulation Disorders.</a> </p> <a href="https://publications.waset.org/9999541/teleme-speech-booster-web-based-speech-therapy-and-training-program-for-children-with-articulation-disorders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9999541/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9999541/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9999541/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9999541/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9999541/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9999541/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9999541/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9999541/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9999541/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9999541/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9999541.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1859</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2282</span> Blind Speech Separation Using SRP-PHAT Localization and Optimal Beamformer in Two-Speaker Environments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hai%20Quang%20Hong%20Dam">Hai Quang Hong Dam</a>, <a href="https://publications.waset.org/search?q=Hai%20Ho"> Hai Ho</a>, <a href="https://publications.waset.org/search?q=Minh%20Hoang%20Le%20Ngo"> Minh Hoang Le Ngo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates the problem of blind speech separation from the speech mixture of two speakers. A voice activity detector employing the Steered Response Power - Phase Transform (SRP-PHAT) is presented for detecting the activity information of speech sources and then the desired speech signals are extracted from the speech mixture by using an optimal beamformer. For evaluation, the algorithm effectiveness, a simulation using real speech recordings had been performed in a double-talk situation where two speakers are active all the time. Evaluations show that the proposed blind speech separation algorithm offers a good interference suppression level whilst maintaining a low distortion level of the desired signal. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Blind%20speech%20separation" title="Blind speech separation">Blind speech separation</a>, <a href="https://publications.waset.org/search?q=voice%20activity%20detector" title=" voice activity detector"> voice activity detector</a>, <a href="https://publications.waset.org/search?q=SRP-PHAT" title=" SRP-PHAT"> SRP-PHAT</a>, <a href="https://publications.waset.org/search?q=optimal%20beamformer." title=" optimal beamformer."> optimal beamformer.</a> </p> <a href="https://publications.waset.org/10005265/blind-speech-separation-using-srp-phat-localization-and-optimal-beamformer-in-two-speaker-environments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10005265/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10005265/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10005265/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10005265/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10005265/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10005265/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10005265/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10005265/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10005265/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10005265/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10005265.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1388</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2281</span> Evaluation of a Multi-Resolution Dyadic Wavelet Transform Method for usable Speech Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Wajdi%20Ghezaiel">Wajdi Ghezaiel</a>, <a href="https://publications.waset.org/search?q=Amel%20Ben%20Slimane%20Rahmouni"> Amel Ben Slimane Rahmouni</a>, <a href="https://publications.waset.org/search?q=Ezzedine%20Ben%20Braiek"> Ezzedine Ben Braiek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Many applications of speech communication and speaker identification suffer from the problem of co-channel speech. This paper deals with a multi-resolution dyadic wavelet transform method for usable segments of co-channel speech detection that could be processed by a speaker identification system. Evaluation of this method is performed on TIMIT database referring to the Target to Interferer Ratio measure. Co-channel speech is constructed by mixing all possible gender speakers. Results do not show much difference for different mixtures. For the overall mixtures 95.76% of usable speech is correctly detected with false alarms of 29.65%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Co-channel%20speech" title="Co-channel speech">Co-channel speech</a>, <a href="https://publications.waset.org/search?q=usable%20speech" title=" usable speech"> usable speech</a>, <a href="https://publications.waset.org/search?q=multi-resolutionanalysis" title=" multi-resolutionanalysis"> multi-resolutionanalysis</a>, <a href="https://publications.waset.org/search?q=speaker%20identification" title=" speaker identification"> speaker identification</a> </p> <a href="https://publications.waset.org/9967/evaluation-of-a-multi-resolution-dyadic-wavelet-transform-method-for-usable-speech-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9967/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9967/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9967/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9967/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9967/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9967/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9967/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9967/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9967/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9967/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9967.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1366</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2280</span> Narrowband Speech Hiding using Vector Quantization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Driss%20Guerchi">Driss Guerchi</a>, <a href="https://publications.waset.org/search?q=Fatiha%20Djebbar"> Fatiha Djebbar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work we introduce an efficient method to limit the impact of the hiding process on the quality of the cover speech. Vector quantization of the speech spectral information reduces drastically the number of the secret speech parameters to be embedded in the cover signal. Compared to scalar hiding, vector quantization hiding technique provides a stego signal that is indistinguishable from the cover speech. The objective and subjective performance measures reveal that the current hiding technique attracts no suspicion about the presence of the secret message in the stego speech, while being able to recover an intelligible copy of the secret message at the receiver side. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20steganography" title="Speech steganography">Speech steganography</a>, <a href="https://publications.waset.org/search?q=LSF%20vector%20quantization" title=" LSF vector quantization"> LSF vector quantization</a>, <a href="https://publications.waset.org/search?q=fast%0AFourier%20transform" title=" fast Fourier transform"> fast Fourier transform</a> </p> <a href="https://publications.waset.org/12059/narrowband-speech-hiding-using-vector-quantization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12059/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12059/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12059/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12059/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12059/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12059/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12059/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12059/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12059/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12059/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12059.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1515</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2279</span> Bangla Vowel Characterization Based on Analysis by Synthesis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Syed%20Akhter%20Hossain">Syed Akhter Hossain</a>, <a href="https://publications.waset.org/search?q=M.%20Lutfar%20Rahman"> M. Lutfar Rahman</a>, <a href="https://publications.waset.org/search?q=Farruk%20Ahmed"> Farruk Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Bangla Vowel characterization determines the spectral properties of Bangla vowels for efficient synthesis as well as recognition of Bangla vowels. In this paper, Bangla vowels in isolated word have been analyzed based on speech production model within the framework of Analysis-by-Synthesis. This has led to the extraction of spectral parameters for the production model in order to produce different Bangla vowel sounds. The real and synthetic spectra are compared and a weighted square error has been computed along with the error in the formant bandwidths for efficient representation of Bangla vowels. The extracted features produced good representation of targeted Bangla vowel. Such a representation also plays essential role in low bit rate speech coding and vocoders.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech" title="Speech">Speech</a>, <a href="https://publications.waset.org/search?q=vowel" title=" vowel"> vowel</a>, <a href="https://publications.waset.org/search?q=formant" title=" formant"> formant</a>, <a href="https://publications.waset.org/search?q=synthesis" title=" synthesis"> synthesis</a>, <a href="https://publications.waset.org/search?q=spectrum" title=" spectrum"> spectrum</a>, <a href="https://publications.waset.org/search?q=LPC." title=" LPC."> LPC.</a> </p> <a href="https://publications.waset.org/13237/bangla-vowel-characterization-based-on-analysis-by-synthesis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13237/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13237/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13237/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13237/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13237/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13237/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13237/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13237/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13237/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13237/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13237.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2371</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2278</span> Using Teager Energy Cepstrum and HMM distancesin Automatic Speech Recognition and Analysis of Unvoiced Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Panikos%20Heracleous">Panikos Heracleous</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, the use of silicon NAM (Non-Audible Murmur) microphone in automatic speech recognition is presented. NAM microphones are special acoustic sensors, which are attached behind the talker-s ear and can capture not only normal (audible) speech, but also very quietly uttered speech (non-audible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise and they might be used in special systems (speech recognition, speech conversion etc.) for sound-impaired people. Using a small amount of training data and adaptation approaches, 93.9% word accuracy was achieved for a 20k Japanese vocabulary dictation task. Non-audible murmur recognition in noisy environments is also investigated. In this study, further analysis of the NAM speech has been made using distance measures between hidden Markov model (HMM) pairs. It has been shown the reduced spectral space of NAM speech using a metric distance, however the location of the different phonemes of NAM are similar to the location of the phonemes of normal speech, and the NAM sounds are well discriminated. Promising results in using nonlinear features are also introduced, especially under noisy conditions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20recognition" title="Speech recognition">Speech recognition</a>, <a href="https://publications.waset.org/search?q=unvoiced%20speech" title=" unvoiced speech"> unvoiced speech</a>, <a href="https://publications.waset.org/search?q=nonlinear%20features" title=" nonlinear features"> nonlinear features</a>, <a href="https://publications.waset.org/search?q=HMM%20distance%20measures" title="HMM distance measures">HMM distance measures</a> </p> <a href="https://publications.waset.org/1838/using-teager-energy-cepstrum-and-hmm-distancesin-automatic-speech-recognition-and-analysis-of-unvoiced-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/1838/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/1838/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/1838/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/1838/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/1838/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/1838/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/1838/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/1838/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/1838/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/1838/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/1838.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1647</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2277</span> Analysis of Combined Use of NN and MFCC for Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Safdar%20Tanweer">Safdar Tanweer</a>, <a href="https://publications.waset.org/search?q=Abdul%20Mobin"> Abdul Mobin</a>, <a href="https://publications.waset.org/search?q=Afshar%20Alam"> Afshar Alam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The performance and analysis of speech recognition system is illustrated in this paper. An approach to recognize the English word corresponding to digit (0-9) spoken by 2 different speakers is captured in noise free environment. For feature extraction, speech Mel frequency cepstral coefficients (MFCC) has been used which gives a set of feature vectors from recorded speech samples. Neural network model is used to enhance the recognition performance. Feed forward neural network with back propagation algorithm model is used. However other speech recognition techniques such as HMM, DTW exist. All experiments are carried out on Matlab.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20Recognition" title="Speech Recognition">Speech Recognition</a>, <a href="https://publications.waset.org/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/search?q=Neural%20Network" title=" Neural Network"> Neural Network</a>, <a href="https://publications.waset.org/search?q=classifier." title=" classifier. "> classifier. </a> </p> <a href="https://publications.waset.org/10000797/analysis-of-combined-use-of-nn-and-mfcc-for-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10000797/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10000797/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10000797/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10000797/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10000797/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10000797/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10000797/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10000797/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10000797/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10000797/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10000797.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3268</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2276</span> On SNR Estimation by the Likelihood of near Pitch for Speech Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Young-Hwan%20Song">Young-Hwan Song</a>, <a href="https://publications.waset.org/search?q=Doo-Heon%20Kyun"> Doo-Heon Kyun</a>, <a href="https://publications.waset.org/search?q=Jong-Kuk%20Kim"> Jong-Kuk Kim</a>, <a href="https://publications.waset.org/search?q=Myung-Jin%20Bae"> Myung-Jin Bae</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>People have the habitual pitch level which is used when people say something generally. However this pitch should be changed irregularly in the presence of noise. So it is useful to estimate SNR of speech signal by pitch. In this paper, we obtain the energy of input speech signal and then we detect a stationary region on voiced speech. And we get the pitch period by NAMDF for the stationary region that is not varied pitch rapidly. After getting pitch, each frame is divided by pitch period and the likelihood of closed pitch is estimated. In this paper, we proposed new parameter, NLF, to estimate the SNR of received speech signal. The NLF is derived from the correlation of near pitch periods. The NLF is obtained for each stationary region in voiced speech. Finally we confirmed good performance of the estimation of the SNR of received input speech in the presence of noise.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Likelihood" title="Likelihood">Likelihood</a>, <a href="https://publications.waset.org/search?q=pitch" title=" pitch"> pitch</a>, <a href="https://publications.waset.org/search?q=SNR" title=" SNR"> SNR</a>, <a href="https://publications.waset.org/search?q=speech." title=" speech."> speech.</a> </p> <a href="https://publications.waset.org/15219/on-snr-estimation-by-the-likelihood-of-near-pitch-for-speech-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/15219/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/15219/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/15219/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/15219/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/15219/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/15219/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/15219/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/15219/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/15219/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/15219/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/15219.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1575</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2275</span> Speech Impact Realization via Manipulative Argumentation Techniques in Modern American Political Discourse</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Zarine%20Avetisyan">Zarine Avetisyan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present paper presents the discussion of scholars concerning speech impact, peculiarities of its realization, speech strategies and techniques in particular. Departing from the viewpoints of many prominent linguists, the paper suggests that manipulative argumentation be viewed as a most pervasive speech strategy with a certain set of techniques which are to be found in modern American political discourse. The precedence of their occurrence allows us to regard them as pragmatic patterns of speech impact realization in effective public speaking. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Manipulative%20argumentation" title="Manipulative argumentation">Manipulative argumentation</a>, <a href="https://publications.waset.org/search?q=political%20discourse" title=" political discourse"> political discourse</a>, <a href="https://publications.waset.org/search?q=speech%20impact" title=" speech impact"> speech impact</a>, <a href="https://publications.waset.org/search?q=technique." title=" technique."> technique.</a> </p> <a href="https://publications.waset.org/10001419/speech-impact-realization-via-manipulative-argumentation-techniques-in-modern-american-political-discourse" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10001419/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10001419/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10001419/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10001419/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10001419/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10001419/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10001419/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10001419/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10001419/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10001419/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10001419.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2289</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2274</span> Extracting Tongue Shape Dynamics from Magnetic Resonance Image Sequences </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mar%C3%ADa%20S.%20Avila-Garc%C3%ADa">María S. Avila-García</a>, <a href="https://publications.waset.org/search?q=John%20N.%20Carter"> John N. Carter</a>, <a href="https://publications.waset.org/search?q=Robert%20I.%20Damper"> Robert I. Damper</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>An important problem in speech research is the automatic extraction of information about the shape and dimensions of the vocal tract during real-time speech production. We have previously developed Southampton dynamic magnetic resonance imaging (SDMRI) as an approach to the solution of this problem.However, the SDMRI images are very noisy so that shape extraction is a major challenge. In this paper, we address the problem of tongue shape extraction, which poses difficulties because this is a highly deforming non-parametric shape. We show that combining active shape models with the dynamic Hough transform allows the tongue shape to be reliably tracked in the image sequence.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Vocal%20tract%20imaging" title="Vocal tract imaging">Vocal tract imaging</a>, <a href="https://publications.waset.org/search?q=speech%20production" title=" speech production"> speech production</a>, <a href="https://publications.waset.org/search?q=active%20shapemodels" title=" active shapemodels"> active shapemodels</a>, <a href="https://publications.waset.org/search?q=dynamic%20Hough%20transform" title=" dynamic Hough transform"> dynamic Hough transform</a>, <a href="https://publications.waset.org/search?q=object%20tracking." title=" object tracking."> object tracking.</a> </p> <a href="https://publications.waset.org/9680/extracting-tongue-shape-dynamics-from-magnetic-resonance-image-sequences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9680/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9680/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9680/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9680/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9680/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9680/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9680/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9680/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9680/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9680/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9680.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1735</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2273</span> Speech Enhancement Using Kalman Filter in Communication</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Eng.%20Alaa%20K.%20Satti%20Salih">Eng. Alaa K. Satti Salih</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Revolutions Applications such as telecommunications, hands-free communications, recording, etc. which need at least one microphone, the signal is usually infected by noise and echo. The important application is the <em>speech enhancement</em>, which is done to remove suppressed noises and echoes taken by a microphone, beside preferred speech. Accordingly, the microphone signal has to be cleaned using digital signal processing DSP tools before it is played out, transmitted, or stored. Engineers have so far tried different approaches to improving the speech by get back the desired speech signal from the noisy observations. Especially Mobile communication, so in this paper will do reconstruction of the speech signal, observed in additive background noise, using the Kalman filter technique to estimate the parameters of the Autoregressive Process (AR) in the state space model and the output speech signal obtained by the MATLAB. The accurate estimation by Kalman filter on speech would enhance and reduce the noise then compare and discuss the results between actual values and estimated values which produce the reconstructed signals.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Autoregressive%20Process" title="Autoregressive Process">Autoregressive Process</a>, <a href="https://publications.waset.org/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/search?q=Matlab%20and%20Noise%20speech." title=" Matlab and Noise speech."> Matlab and Noise speech.</a> </p> <a href="https://publications.waset.org/9998076/speech-enhancement-using-kalman-filter-in-communication" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998076/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998076/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998076/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998076/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998076/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998076/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998076/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998076/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998076/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998076/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998076.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">4025</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2272</span> Automatic Segmentation of the Clean Speech Signal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=M.%20A.%20Ben%20Messaoud">M. A. Ben Messaoud</a>, <a href="https://publications.waset.org/search?q=A.%20Bouzid"> A. Bouzid</a>, <a href="https://publications.waset.org/search?q=N.%20Ellouze"> N. Ellouze</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Speech Segmentation is the measure of the change point detection for partitioning an input speech signal into regions each of which accords to only one speaker. In this paper, we apply two features based on multi-scale product (MP) of the clean speech, namely the spectral centroid of MP, and the zero crossings rate of MP. We focus on multi-scale product analysis as an important tool for segmentation extraction. The MP is based on making the product of the speech wavelet transform coefficients (WTC). We have estimated our method on the Keele database. The results show the effectiveness of our method. It indicates that the two features can find word boundaries, and extracted the segments of the clean speech.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20segmentation" title="Speech segmentation">Speech segmentation</a>, <a href="https://publications.waset.org/search?q=Multi-scale%20product" title=" Multi-scale product"> Multi-scale product</a>, <a href="https://publications.waset.org/search?q=Spectral%0D%0Acentroid" title=" Spectral centroid"> Spectral centroid</a>, <a href="https://publications.waset.org/search?q=Zero%20crossings%20rate." title=" Zero crossings rate."> Zero crossings rate.</a> </p> <a href="https://publications.waset.org/10000638/automatic-segmentation-of-the-clean-speech-signal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10000638/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10000638/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10000638/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10000638/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10000638/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10000638/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10000638/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10000638/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10000638/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10000638/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10000638.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2508</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2271</span> Optimum Cascaded Design for Speech Enhancement Using Kalman Filter</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=T.%20Kishore%20Kumar">T. Kishore Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech enhancement is the process of eliminating noise and increasing the quality of a speech signal, which is contaminated with other kinds of distortions. This paper is on developing an optimum cascaded system for speech enhancement. This aim is attained without diminishing any relevant speech information and without much computational and time complexity. LMS algorithm, Spectral Subtraction and Kalman filter have been deployed as the main de-noising algorithms in this work. Since these algorithms suffer from respective shortcomings, this work has been undertaken to design cascaded systems in different combinations and the evaluation of such cascades by qualitative (listening) and quantitative (SNR) tests. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=LMS" title="LMS">LMS</a>, <a href="https://publications.waset.org/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/search?q=Speech%20Enhancement%20and%0ASpectral%20Subtraction." title=" Speech Enhancement and Spectral Subtraction."> Speech Enhancement and Spectral Subtraction.</a> </p> <a href="https://publications.waset.org/15307/optimum-cascaded-design-for-speech-enhancement-using-kalman-filter" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/15307/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/15307/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/15307/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/15307/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/15307/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/15307/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/15307/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/15307/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/15307/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/15307/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/15307.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1733</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2270</span> Possibilities, Challenges and the State of the Art of Automatic Speech Recognition in Air Traffic Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Van%20Nhan%20Nguyen">Van Nhan Nguyen</a>, <a href="https://publications.waset.org/search?q=Harald%20Holone"> Harald Holone</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the past few years, a lot of research has been conducted to bring Automatic Speech Recognition (ASR) into various areas of Air Traffic Control (ATC), such as air traffic control simulation and training, monitoring live operators for with the aim of safety improvements, air traffic controller workload measurement and conducting analysis on large quantities controller-pilot speech. Due to the high accuracy requirements of the ATC context and its unique challenges, automatic speech recognition has not been widely adopted in this field. With the aim of providing a good starting point for researchers who are interested bringing automatic speech recognition into ATC, this paper gives an overview of possibilities and challenges of applying automatic speech recognition in air traffic control. To provide this overview, we present an updated literature review of speech recognition technologies in general, as well as specific approaches relevant to the ATC context. Based on this literature review, criteria for selecting speech recognition approaches for the ATC domain are presented, and remaining challenges and possible solutions are discussed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Automatic%20Speech%20Recognition" title="Automatic Speech Recognition">Automatic Speech Recognition</a>, <a href="https://publications.waset.org/search?q=ASR" title=" ASR"> ASR</a>, <a href="https://publications.waset.org/search?q=Air%20Traffic%0D%0AControl" title=" Air Traffic Control"> Air Traffic Control</a>, <a href="https://publications.waset.org/search?q=ATC." title=" ATC."> ATC.</a> </p> <a href="https://publications.waset.org/10002245/possibilities-challenges-and-the-state-of-the-art-of-automatic-speech-recognition-in-air-traffic-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10002245/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10002245/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10002245/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10002245/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10002245/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10002245/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10002245/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10002245/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10002245/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10002245/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10002245.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">4043</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2269</span> Speech Data Compression using Vector Quantization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=H.%20B.%20Kekre">H. B. Kekre</a>, <a href="https://publications.waset.org/search?q=Tanuja%20K.%20Sarode"> Tanuja K. Sarode</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Mostly transforms are used for speech data compressions which are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However the vector quantization (VQ) has a potential to give more data compression maintaining the same quality. In this paper we propose speech data compression algorithm using vector quantization technique. We have used VQ algorithms LBG, KPE and FCG. The results table shows computational complexity of these three algorithms. Here we have introduced a new performance parameter Average Fractional Change in Speech Sample (AFCSS). Our FCG algorithm gives far better performance considering mean absolute error, AFCSS and complexity as compared to others. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Vector%20Quantization" title="Vector Quantization">Vector Quantization</a>, <a href="https://publications.waset.org/search?q=Data%20Compression" title=" Data Compression"> Data Compression</a>, <a href="https://publications.waset.org/search?q=Encoding" title=" Encoding"> Encoding</a>, <a href="https://publications.waset.org/search?q=" title=""></a>, <a href="https://publications.waset.org/search?q=Speech%20coding." title=" Speech coding."> Speech coding.</a> </p> <a href="https://publications.waset.org/15794/speech-data-compression-using-vector-quantization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/15794/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/15794/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/15794/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/15794/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/15794/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/15794/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/15794/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/15794/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/15794/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/15794/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/15794.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2404</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2268</span> Performance Evaluation of Acoustic-Spectrographic Voice Identification Method in Native and Non-Native Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=E.%20Krasnova">E. Krasnova</a>, <a href="https://publications.waset.org/search?q=E.%20Bulgakova"> E. Bulgakova</a>, <a href="https://publications.waset.org/search?q=V.%20Shchemelinin"> V. Shchemelinin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with acoustic-spectrographic voice identification method in terms of its performance in non-native language speech. Performance evaluation is conducted by comparing the result of the analysis of recordings containing native language speech with recordings that contain foreign language speech. Our research is based on Tajik and Russian speech of Tajik native speakers due to the character of the criminal situation with drug trafficking. We propose a pilot experiment that represents a primary attempt enter the field. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speaker%20identification" title="Speaker identification">Speaker identification</a>, <a href="https://publications.waset.org/search?q=acoustic-spectrographic%0D%0Amethod" title=" acoustic-spectrographic method"> acoustic-spectrographic method</a>, <a href="https://publications.waset.org/search?q=non-native%20speech." title=" non-native speech."> non-native speech.</a> </p> <a href="https://publications.waset.org/10004820/performance-evaluation-of-acoustic-spectrographic-voice-identification-method-in-native-and-non-native-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10004820/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10004820/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10004820/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10004820/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10004820/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10004820/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10004820/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10004820/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10004820/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10004820/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10004820.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">866</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2267</span> Minimum Data of a Speech Signal as Special Indicators of Identification in Phonoscopy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nazaket%20Gazieva">Nazaket Gazieva</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Voice biometric data associated with physiological, psychological and other factors are widely used in forensic phonoscopy. There are various methods for identifying and verifying a person by voice. This article explores the minimum speech signal data as individual parameters of a speech signal. Monozygotic twins are believed to be genetically identical. Using the minimum data of the speech signal, we came to the conclusion that the voice imprint of monozygotic twins is individual. According to the conclusion of the experiment, we can conclude that the minimum indicators of the speech signal are more stable and reliable for phonoscopic examinations.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Biometric%20voice%20prints" title="Biometric voice prints">Biometric voice prints</a>, <a href="https://publications.waset.org/search?q=fundamental%20frequency" title=" fundamental frequency"> fundamental frequency</a>, <a href="https://publications.waset.org/search?q=phonogram" title=" phonogram"> phonogram</a>, <a href="https://publications.waset.org/search?q=speech%20signal" title=" speech signal"> speech signal</a>, <a href="https://publications.waset.org/search?q=temporal%20characteristics." title=" temporal characteristics."> temporal characteristics.</a> </p> <a href="https://publications.waset.org/10011375/minimum-data-of-a-speech-signal-as-special-indicators-of-identification-in-phonoscopy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011375/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011375/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011375/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011375/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011375/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011375/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011375/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011375/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011375/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011375/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011375.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">577</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2266</span> High-Individuality Voice Conversion Based on Concatenative Speech Synthesis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Kei%20Fujii">Kei Fujii</a>, <a href="https://publications.waset.org/search?q=Jun%20Okawa"> Jun Okawa</a>, <a href="https://publications.waset.org/search?q=Kaori%20Suigetsu"> Kaori Suigetsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Concatenative speech synthesis is a method that can make speech sound which has naturalness and high-individuality of a speaker by introducing a large speech corpus. Based on this method, in this paper, we propose a voice conversion method whose conversion speech has high-individuality and naturalness. The authors also have two subjective evaluation experiments for evaluating individuality and sound quality of conversion speech. From the results, following three facts have be confirmed: (a) the proposal method can convert the individuality of speakers well, (b) employing the framework of unit selection (especially join cost) of concatenative speech synthesis into conventional voice conversion improves the sound quality of conversion speech, and (c) the proposal method is robust against the difference of genders between a source speaker and a target speaker. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=concatenative%20speech%20synthesis" title="concatenative speech synthesis">concatenative speech synthesis</a>, <a href="https://publications.waset.org/search?q=join%20cost" title=" join cost"> join cost</a>, <a href="https://publications.waset.org/search?q=speaker%20individuality" title=" speaker individuality"> speaker individuality</a>, <a href="https://publications.waset.org/search?q=unit%20selection" title=" unit selection"> unit selection</a>, <a href="https://publications.waset.org/search?q=voice%20conversion" title=" voice conversion"> voice conversion</a> </p> <a href="https://publications.waset.org/1272/high-individuality-voice-conversion-based-on-concatenative-speech-synthesis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/1272/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/1272/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/1272/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/1272/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/1272/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/1272/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/1272/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/1272/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/1272/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/1272/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/1272.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1939</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2265</span> Assamese Numeral Corpus for Speech Recognition using Cooperative ANN Architecture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mousmita%20Sarma">Mousmita Sarma</a>, <a href="https://publications.waset.org/search?q=Krishna%20Dutta"> Krishna Dutta</a>, <a href="https://publications.waset.org/search?q=Kandarpa%20Kumar%20Sarma"> Kandarpa Kumar Sarma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech corpus is one of the major components in a Speech Processing System where one of the primary requirements is to recognize an input sample. The quality and details captured in speech corpus directly affects the precision of recognition. The current work proposes a platform for speech corpus generation using an adaptive LMS filter and LPC cepstrum, as a part of an ANN based Speech Recognition System which is exclusively designed to recognize isolated numerals of Assamese language- a major language in the North Eastern part of India. The work focuses on designing an optimal feature extraction block and a few ANN based cooperative architectures so that the performance of the Speech Recognition System can be improved. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Filter" title="Filter">Filter</a>, <a href="https://publications.waset.org/search?q=Feature" title=" Feature"> Feature</a>, <a href="https://publications.waset.org/search?q=LMS" title=" LMS"> LMS</a>, <a href="https://publications.waset.org/search?q=LPC" title=" LPC"> LPC</a>, <a href="https://publications.waset.org/search?q=Cepstrum" title=" Cepstrum"> Cepstrum</a>, <a href="https://publications.waset.org/search?q=ANN." title=" ANN."> ANN.</a> </p> <a href="https://publications.waset.org/334/assamese-numeral-corpus-for-speech-recognition-using-cooperative-ann-architecture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/334/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/334/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/334/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/334/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/334/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/334/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/334/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/334/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/334/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/334/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/334.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2385</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2264</span> The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Fawaz%20S.%20Al-Anzi">Fawaz S. Al-Anzi</a>, <a href="https://publications.waset.org/search?q=Dia%20AbuZeina"> Dia AbuZeina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Speech recognition is of an important contribution in promoting new technologies in human computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires different stages before obtaining the desired output. Among automatic speech recognition (ASR) components is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. Feature extraction process aims at approximating the linguistic content that is conveyed by the input speech signal. In speech processing field, there are several methods to extract speech features, however, Mel Frequency Cepstral Coefficients (MFCC) is the popular technique. It has been long observed that the MFCC is dominantly used in the well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice to identify the different speech segments in order to obtain the language phonemes for further training and decoding steps. Due to MFCC good performance, the previous studies show that the MFCC dominates the Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps that are performed to get these coefficients using the HTK toolkit.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20recognition" title="Speech recognition">Speech recognition</a>, <a href="https://publications.waset.org/search?q=acoustic%20features" title=" acoustic features"> acoustic features</a>, <a href="https://publications.waset.org/search?q=Mel%20Frequency%20Cepstral%20Coefficients." title=" Mel Frequency Cepstral Coefficients."> Mel Frequency Cepstral Coefficients.</a> </p> <a href="https://publications.waset.org/10008047/the-capacity-of-mel-frequency-cepstral-coefficients-for-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10008047/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10008047/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10008047/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10008047/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10008047/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10008047/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10008047/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10008047/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10008047/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10008047/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10008047.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1973</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2263</span> Voice Features as the Diagnostic Marker of Autism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Elena%20Lyakso">Elena Lyakso</a>, <a href="https://publications.waset.org/search?q=Olga%20Frolova"> Olga Frolova</a>, <a href="https://publications.waset.org/search?q=Yuri%20Matveev"> Yuri Matveev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The aim of the study is to determine the acoustic features of voice and speech of children with autism spectrum disorders (ASD) as a possible additional diagnostic criterion. The participants in the study were 95 children with ASD aged 5-16 years, 150 typically development (TD) children, and 103 adults – listening to children’s speech samples. Three types of experimental methods for speech analysis were performed: spectrographic, perceptual by listeners, and automatic recognition. In the speech of children with ASD, the pitch values, pitch range, values of frequency and intensity of the third formant (emotional) leading to the “atypical” spectrogram of vowels are higher than corresponding parameters in the speech of TD children. High values of vowel articulation index (VAI) are specific for ASD children’s speech signals. These acoustic features can be considered as diagnostic marker of autism. The ability of humans and automatic recognition of the psychoneurological state of children via their speech is determined.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Autism%20spectrum%20disorders" title="Autism spectrum disorders">Autism spectrum disorders</a>, <a href="https://publications.waset.org/search?q=biomarker%20of%20autism" title=" biomarker of autism"> biomarker of autism</a>, <a href="https://publications.waset.org/search?q=child%20speech" title=" child speech"> child speech</a>, <a href="https://publications.waset.org/search?q=voice%20features." title=" voice features."> voice features.</a> </p> <a href="https://publications.waset.org/10012604/voice-features-as-the-diagnostic-marker-of-autism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10012604/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10012604/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10012604/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10012604/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10012604/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10012604/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10012604/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10012604/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10012604/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10012604/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10012604.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">619</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2262</span> A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Qianhua%20He">Qianhua He</a>, <a href="https://publications.waset.org/search?q=Weili%20Zhou"> Weili Zhou</a>, <a href="https://publications.waset.org/search?q=Aiwu%20Chen"> Aiwu Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>A sparse representation speech denoising method based on adapted stopping residue error was presented in this paper. Firstly, the cross-correlation between the clean speech spectrum and the noise spectrum was analyzed, and an estimation method was proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum was learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error was adaptively achieved according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach was applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech was re-synthesised via the inverse Fourier transform with the reconstructed speech spectrum and the noisy speech phase. The experiment results show that the proposed method outperforms the conventional methods in terms of subjective and objective measure.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20denoising" title="Speech denoising">Speech denoising</a>, <a href="https://publications.waset.org/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a>, <a href="https://publications.waset.org/search?q=K-singular%20value%20decomposition" title=" K-singular value decomposition"> K-singular value decomposition</a>, <a href="https://publications.waset.org/search?q=orthogonal%20matching%20pursuit." title=" orthogonal matching pursuit."> orthogonal matching pursuit.</a> </p> <a href="https://publications.waset.org/10008151/a-sparse-representation-speech-denoising-method-based-on-adapted-stopping-residue-error" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10008151/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10008151/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10008151/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10008151/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10008151/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10008151/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10008151/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10008151/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10008151/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10008151/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10008151.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1014</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2261</span> Eisenhower’s Farewell Speech: Initial and Continuing Communication Effects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=B.%20Kuiper">B. Kuiper</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>When Dwight D. Eisenhower delivered his final Presidential speech in 1961, he was using the opportunity to bid farewell to America, but he was also trying to warn his fellow countrymen about deeper challenges threatening the country. In this analysis, Eisenhower’s speech is examined in light of the impact it had on American culture, communication concepts, and political ramifications. The paper initially highlights the previous literature on the speech, especially in light of its 50<sup>th </sup>anniversary, and reveals a man whose main concern was how the speech’s words would affect his beloved country. The painstaking approach to the wording of the speech to reveal the intent is key, particularly in light of analyzing the motivations according to “virtuous communication.” This philosophical construct indicates that Eisenhower’s Farewell Address was crafted carefully according to a departing President’s deepest values and concerns, concepts that he wanted to pass along to his successor, to his country, and even to the world.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Eisenhower" title="Eisenhower">Eisenhower</a>, <a href="https://publications.waset.org/search?q=mass%20communication" title=" mass communication"> mass communication</a>, <a href="https://publications.waset.org/search?q=political%20speech" title=" political speech"> political speech</a>, <a href="https://publications.waset.org/search?q=rhetoric." title=" rhetoric."> rhetoric.</a> </p> <a href="https://publications.waset.org/10004735/eisenhowers-farewell-speech-initial-and-continuing-communication-effects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10004735/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10004735/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10004735/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10004735/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10004735/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10004735/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10004735/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10004735/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10004735/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10004735/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10004735.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1869</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2260</span> Hybrid Modeling Algorithm for Continuous Tamil Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=M.%20Kalamani">M. Kalamani</a>, <a href="https://publications.waset.org/search?q=S.%20Valarmathy"> S. Valarmathy</a>, <a href="https://publications.waset.org/search?q=M.%20Krishnamoorthi"> M. Krishnamoorthi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, Fuzzy C-Means clustering with Expectation Maximization-Gaussian Mixture Model based hybrid modeling algorithm is proposed for Continuous Tamil Speech Recognition. The speech sentences from various speakers are used for training and testing phase and objective measures are between the proposed and existing Continuous Speech Recognition algorithms. From the simulated results, it is observed that the proposed algorithm improves the recognition accuracy and F-measure up to 3% as compared to that of the existing algorithms for the speech signal from various speakers. In addition, it reduces the Word Error Rate, Error Rate and Error up to 4% as compared to that of the existing algorithms. In all aspects, the proposed hybrid modeling for Tamil speech recognition provides the significant improvements for speechto- text conversion in various applications.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20Segmentation" title="Speech Segmentation">Speech Segmentation</a>, <a href="https://publications.waset.org/search?q=Feature%20Extraction" title=" Feature Extraction"> Feature Extraction</a>, <a href="https://publications.waset.org/search?q=Clustering" title=" Clustering"> Clustering</a>, <a href="https://publications.waset.org/search?q=HMM" title=" HMM"> HMM</a>, <a href="https://publications.waset.org/search?q=EM-GMM" title=" EM-GMM"> EM-GMM</a>, <a href="https://publications.waset.org/search?q=CSR." title=" CSR."> CSR.</a> </p> <a href="https://publications.waset.org/10000174/hybrid-modeling-algorithm-for-continuous-tamil-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10000174/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10000174/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10000174/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10000174/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10000174/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10000174/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10000174/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10000174/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10000174/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10000174/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10000174.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2139</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2259</span> Neural Network Based Speech to Text in Malay Language </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=H.%20F.%20A.%20Abdul%20Ghani">H. F. A. Abdul Ghani</a>, <a href="https://publications.waset.org/search?q=R.%20R.%20Porle"> R. R. Porle</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Speech to text in Malay language is a system that converts Malay speech into text. The Malay language recognition system is still limited, thus, this paper aims to investigate the performance of ten Malay words obtained from the online Malay news. The methodology consists of three stages, which are preprocessing, feature extraction, and speech classification. In preprocessing stage, the speech samples are filtered using pre emphasis. After that, feature extraction method is applied to the samples using Mel Frequency Cepstrum Coefficient (MFCC). Lastly, speech classification is performed using Feedforward Neural Network (FFNN). The accuracy of the classification is further investigated based on the hidden layer size. From experimentation, the classifier with 40 hidden neurons shows the highest classification rate which is 94%. </p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Feed-Forward%20Neural%20Network" title="Feed-Forward Neural Network">Feed-Forward Neural Network</a>, <a href="https://publications.waset.org/search?q=FFNN" title=" FFNN"> FFNN</a>, <a href="https://publications.waset.org/search?q=Malay%20speech%20recognition" title=" Malay speech recognition"> Malay speech recognition</a>, <a href="https://publications.waset.org/search?q=Mel%20Frequency%20Cepstrum%20Coefficient" title=" Mel Frequency Cepstrum Coefficient"> Mel Frequency Cepstrum Coefficient</a>, <a href="https://publications.waset.org/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/search?q=speech-to-text." title=" speech-to-text."> speech-to-text.</a> </p> <a href="https://publications.waset.org/10010905/neural-network-based-speech-to-text-in-malay-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10010905/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10010905/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10010905/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10010905/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10010905/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10010905/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10010905/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10010905/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10010905/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10010905/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10010905.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">746</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2258</span> On the Effectivity of Different Pseudo-Noise and Orthogonal Sequences for Speech Encryption from Correlation Properties</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=V.%20Anil%20Kumar">V. Anil Kumar</a>, <a href="https://publications.waset.org/search?q=Abhijit%20Mitra"> Abhijit Mitra</a>, <a href="https://publications.waset.org/search?q=S.%20R.%20Mahadeva%20Prasanna"> S. R. Mahadeva Prasanna</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>We analyze the effectivity of different pseudo noise (PN) and orthogonal sequences for encrypting speech signals in terms of perceptual intelligence. Speech signal can be viewed as sequence of correlated samples and each sample as sequence of bits. The residual intelligibility of the speech signal can be reduced by removing the correlation among the speech samples. PN sequences have random like properties that help in reducing the correlation among speech samples. The mean square aperiodic auto-correlation (MSAAC) and the mean square aperiodic cross-correlation (MSACC) measures are used to test the randomness of the PN sequences. Results of the investigation show the effectivity of large Kasami sequences for this purpose among many PN sequences.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20encryption" title="Speech encryption">Speech encryption</a>, <a href="https://publications.waset.org/search?q=pseudo-noise%20codes" title=" pseudo-noise codes"> pseudo-noise codes</a>, <a href="https://publications.waset.org/search?q=maximallength" title=" maximallength"> maximallength</a>, <a href="https://publications.waset.org/search?q=Gold" title=" Gold"> Gold</a>, <a href="https://publications.waset.org/search?q=Barker" title=" Barker"> Barker</a>, <a href="https://publications.waset.org/search?q=Kasami" title=" Kasami"> Kasami</a>, <a href="https://publications.waset.org/search?q=Walsh-Hadamard" title=" Walsh-Hadamard"> Walsh-Hadamard</a>, <a href="https://publications.waset.org/search?q=autocorrelation" title=" autocorrelation"> autocorrelation</a>, <a href="https://publications.waset.org/search?q=crosscorrelation" title="crosscorrelation">crosscorrelation</a>, <a href="https://publications.waset.org/search?q=figure%20of%20merit." title=" figure of merit."> figure of merit.</a> </p> <a href="https://publications.waset.org/6316/on-the-effectivity-of-different-pseudo-noise-and-orthogonal-sequences-for-speech-encryption-from-correlation-properties" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6316/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6316/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6316/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6316/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6316/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6316/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6316/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6316/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6316/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6316/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6316.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2041</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=76">76</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=77">77</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Speech%20Production&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>