
Search results for: speech analysis

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="speech analysis"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 28304</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: speech analysis</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28304</span> An Analysis of Illocutioary Act in Martin Luther King Jr.&#039;s Propaganda Speech Entitled &#039;I Have a Dream&#039;</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahgfirah%20Firdaus%20Soberatta">Mahgfirah Firdaus Soberatta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Language cannot be separated from human life. Humans use language to convey ideas, thoughts, and feelings. We can use words for different things for example like asserted, advising, promise, give opinions, hopes, etc. Propaganda is an attempt which seeks to obtain stable behavior to adopt everyone to his everyday life. It also controls the thoughts and attitudes of individuals in social settings permanent. In this research, the writer will discuss about the speech act in a propaganda speech delivered by Martin Luther King Jr. in Washington at Lincoln Memorial on August 28, 1963. 'I Have a Dream' is a public speech delivered by American civil rights activist MLK, he calls from an end to racism in USA. In this research, the writer uses Searle theory to analyze the types of illocutionary speech act that used by Martin Luther King Jr. in his propaganda speech. In this research, the writer uses a qualitative method described in descriptive, because the research wants to describe and explain the types of illocutionary speech acts used by Martin Luther King Jr. in his propaganda speech. The findings indicate that there are five types of speech acts in Martin Luther King Jr. speech. MLK also used direct speech and indirect speech in his propaganda speech. However, direct speech is the dominant speech act that MLK used in his propaganda speech. It is hoped that this research is useful for the readers to enrich their knowledge in a particular field of pragmatic speech acts. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20act" title="speech act">speech act</a>, <a href="https://publications.waset.org/abstracts/search?q=propaganda" title=" propaganda"> propaganda</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20Luther%20King%20Jr." title=" Martin Luther King Jr."> Martin Luther King Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=speech" title=" speech"> speech</a> </p> <a href="https://publications.waset.org/abstracts/45649/an-analysis-of-illocutioary-act-in-martin-luther-king-jrs-propaganda-speech-entitled-i-have-a-dream" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45649.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">441</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28303</span> The Online Advertising Speech that Effect to the Thailand Internet User Decision Making</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Panprae%20Bunyapukkna">Panprae Bunyapukkna</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigated figures of speech used in fragrance advertising captions on the Internet. The objectives of the study were to find out the frequencies of figures of speech in fragrance advertising captions and the types of figures of speech most commonly applied in captions. The relation between figures of speech and fragrance was also examined in order to analyze how figures of speech were used to represent fragrance. Thirty-five fragrance advertisements were randomly selected from the Internet. Content analysis was applied in order to consider the relation between figures of speech and fragrance. The results showed that figures of speech were found in almost every fragrance advertisement except one advertisement of Lancôme. Thirty-four fragrance advertising captions used at least one kind of figure of speech. Metaphor was most frequently found and also most frequently applied in fragrance advertising captions, followed by alliteration, rhyme, simile and personification, and hyperbole respectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advertising%20speech" title="advertising speech">advertising speech</a>, <a href="https://publications.waset.org/abstracts/search?q=fragrance%20advertisements" title=" fragrance advertisements"> fragrance advertisements</a>, <a href="https://publications.waset.org/abstracts/search?q=figures%20of%20speech" title=" figures of speech"> figures of speech</a>, <a href="https://publications.waset.org/abstracts/search?q=metaphor" title=" metaphor"> metaphor</a> </p> <a href="https://publications.waset.org/abstracts/44259/the-online-advertising-speech-that-effect-to-the-thailand-internet-user-decision-making" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44259.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">241</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28302</span> Development of Non-Intrusive Speech Evaluation Measure Using S-Transform and Light-Gbm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tusar%20Kanti%20Dash">Tusar Kanti Dash</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganapati%20Panda"> Ganapati Panda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The evaluation of speech quality and intelligence is critical to the overall effectiveness of the Speech Enhancement Algorithms. Several intrusive and non-intrusive measures are employed to calculate these parameters. Non-Intrusive Evaluation is most challenging as, very often, the reference clean speech data is not available. In this paper, a novel non-intrusive speech evaluation measure is proposed using audio features derived from the Stockwell transform. These features are used with the Light Gradient Boosting Machine for the effective prediction of speech quality and intelligibility. The proposed model is analyzed using noisy and reverberant speech from four databases, and the results are compared with the standard Intrusive Evaluation Measures. It is observed from the comparative analysis that the proposed model is performing better than the standard Non-Intrusive models. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=non-Intrusive%20speech%20evaluation" title="non-Intrusive speech evaluation">non-Intrusive speech evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=S-transform" title=" S-transform"> S-transform</a>, <a href="https://publications.waset.org/abstracts/search?q=light%20GBM" title=" light GBM"> light GBM</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20quality" title=" speech quality"> speech quality</a>, <a href="https://publications.waset.org/abstracts/search?q=and%20intelligibility" title=" and intelligibility"> and intelligibility</a> </p> <a href="https://publications.waset.org/abstracts/139626/development-of-non-intrusive-speech-evaluation-measure-using-s-transform-and-light-gbm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139626.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28301</span> Robust Noisy Speech Identification Using Frame Classifier Derived Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Punnoose%20A.%20K.">Punnoose A. K.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an approach for identifying noisy speech recording using a multi-layer perception (MLP) trained to predict phonemes from acoustic features. Characteristics of the MLP posteriors are explored for clean speech and noisy speech at the frame level. Appropriate density functions are used to fit the softmax probability of the clean and noisy speech. A function that takes into account the ratio of the softmax probability density of noisy speech to clean speech is formulated. These phoneme independent scoring is weighted using a phoneme-specific weightage to make the scoring more robust. Simple thresholding is used to identify the noisy speech recording from the clean speech recordings. The approach is benchmarked on standard databases, with a focus on precision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=noisy%20speech%20identification" title="noisy speech identification">noisy speech identification</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20pre-processing" title=" speech pre-processing"> speech pre-processing</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20robustness" title=" noise robustness"> noise robustness</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20engineering" title=" feature engineering"> feature engineering</a> </p> <a href="https://publications.waset.org/abstracts/144694/robust-noisy-speech-identification-using-frame-classifier-derived-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144694.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28300</span> Possibilities, Challenges and the State of the Art of Automatic Speech Recognition in Air Traffic Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Van%20Nhan%20Nguyen">Van Nhan Nguyen</a>, <a href="https://publications.waset.org/abstracts/search?q=Harald%20Holone"> Harald Holone</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the past few years, a lot of research has been conducted to bring Automatic Speech Recognition (ASR) into various areas of Air Traffic Control (ATC), such as air traffic control simulation and training, monitoring live operators for with the aim of safety improvements, air traffic controller workload measurement and conducting analysis on large quantities controller-pilot speech. Due to the high accuracy requirements of the ATC context and its unique challenges, automatic speech recognition has not been widely adopted in this field. With the aim of providing a good starting point for researchers who are interested bringing automatic speech recognition into ATC, this paper gives an overview of possibilities and challenges of applying automatic speech recognition in air traffic control. To provide this overview, we present an updated literature review of speech recognition technologies in general, as well as specific approaches relevant to the ATC context. Based on this literature review, criteria for selecting speech recognition approaches for the ATC domain are presented, and remaining challenges and possible solutions are discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=asr" title=" asr"> asr</a>, <a href="https://publications.waset.org/abstracts/search?q=air%20traffic%20control" title=" air traffic control"> air traffic control</a>, <a href="https://publications.waset.org/abstracts/search?q=atc" title=" atc"> atc</a> </p> <a href="https://publications.waset.org/abstracts/31004/possibilities-challenges-and-the-state-of-the-art-of-automatic-speech-recognition-in-air-traffic-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31004.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28299</span> Performance Evaluation of Acoustic-Spectrographic Voice Identification Method in Native and Non-Native Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=E.%20Krasnova">E. Krasnova</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Bulgakova"> E. Bulgakova</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Shchemelinin"> V. Shchemelinin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with acoustic-spectrographic voice identification method in terms of its performance in non-native language speech. Performance evaluation is conducted by comparing the result of the analysis of recordings containing native language speech with recordings that contain foreign language speech. Our research is based on Tajik and Russian speech of Tajik native speakers due to the character of the criminal situation with drug trafficking. We propose a pilot experiment that represents a primary attempt enter the field. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title="speaker identification">speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic-spectrographic%20method" title=" acoustic-spectrographic method"> acoustic-spectrographic method</a>, <a href="https://publications.waset.org/abstracts/search?q=non-native%20speech" title=" non-native speech"> non-native speech</a>, <a href="https://publications.waset.org/abstracts/search?q=performance%20evaluation" title=" performance evaluation"> performance evaluation</a> </p> <a href="https://publications.waset.org/abstracts/12496/performance-evaluation-of-acoustic-spectrographic-voice-identification-method-in-native-and-non-native-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12496.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28298</span> Automatic Segmentation of the Clean Speech Signal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Ben%20Messaoud">M. A. Ben Messaoud</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Bouzid"> A. Bouzid</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Ellouze"> N. Ellouze</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech Segmentation is the measure of the change point detection for partitioning an input speech signal into regions each of which accords to only one speaker. In this paper, we apply two features based on multi-scale product (MP) of the clean speech, namely the spectral centroid of MP, and the zero crossings rate of MP. We focus on multi-scale product analysis as an important tool for segmentation extraction. The multi-scale product is based on making the product of the speech wavelet transform coefficients at three successive dyadic scales. We have evaluated our method on the Keele database. Experimental results show the effectiveness of our method presenting a good performance. It shows that the two simple features can find word boundaries, and extracted the segments of the clean speech. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multiscale%20product" title="multiscale product">multiscale product</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20centroid" title=" spectral centroid"> spectral centroid</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20segmentation" title=" speech segmentation"> speech segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=zero%20crossings%20rate" title=" zero crossings rate"> zero crossings rate</a> </p> <a href="https://publications.waset.org/abstracts/17566/automatic-segmentation-of-the-clean-speech-signal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17566.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">499</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28297</span> TeleMe Speech Booster: Web-Based Speech Therapy and Training Program for Children with Articulation Disorders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20Treerattanaphan">C. Treerattanaphan</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Boonpramuk"> P. Boonpramuk</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Singla"> P. Singla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Frequent, continuous speech training has proven to be a necessary part of a successful speech therapy process, but constraints of traveling time and employment dispensation become key obstacles especially for individuals living in remote areas or for dependent children who have working parents. In order to ameliorate speech difficulties with ample guidance from speech therapists, a website has been developed that supports speech therapy and training for people with articulation disorders in the standard Thai language. This web-based program has the ability to record speech training exercises for each speech trainee. The records will be stored in a database for the speech therapist to investigate, evaluate, compare and keep track of all trainees’ progress in detail. Speech trainees can request live discussions via video conference call when needed. Communication through this web-based program facilitates and reduces training time in comparison to walk-in training or appointments. This type of training also allows people with articulation disorders to practice speech lessons whenever or wherever is convenient for them, which can lead to a more regular training processes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=web-based%20remote%20training%20program" title="web-based remote training program">web-based remote training program</a>, <a href="https://publications.waset.org/abstracts/search?q=Thai%20speech%20therapy" title=" Thai speech therapy"> Thai speech therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=articulation%20disorders" title=" articulation disorders"> articulation disorders</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20booster" title=" speech booster"> speech booster</a> </p> <a href="https://publications.waset.org/abstracts/13916/teleme-speech-booster-web-based-speech-therapy-and-training-program-for-children-with-articulation-disorders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13916.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28296</span> Eisenhower’s Farewell Speech: Initial and Continuing Communication Effects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Kuiper">B. Kuiper</a> </p> <p class="card-text"><strong>Abstract:</strong></p> When Dwight D. Eisenhower delivered his final Presidential speech in 1961, he was using the opportunity to bid farewell to America, but he was also trying to warn his fellow countrymen about deeper challenges threatening the country. In this analysis, Eisenhower&rsquo;s speech is examined in light of the impact it had on American culture, communication concepts, and political ramifications. The paper initially highlights the previous literature on the speech, especially in light of its 50<sup>th </sup>anniversary, and reveals a man whose main concern was how the speech&rsquo;s words would affect his beloved country. The painstaking approach to the wording of the speech to reveal the intent is key, particularly in light of analyzing the motivations according to &ldquo;virtuous communication.&rdquo; This philosophical construct indicates that Eisenhower&rsquo;s Farewell Address was crafted carefully according to a departing President&rsquo;s deepest values and concerns, concepts that he wanted to pass along to his successor, to his country, and even to the world. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eisenhower" title="Eisenhower">Eisenhower</a>, <a href="https://publications.waset.org/abstracts/search?q=mass%20communication" title=" mass communication"> mass communication</a>, <a href="https://publications.waset.org/abstracts/search?q=political%20speech" title=" political speech"> political speech</a>, <a href="https://publications.waset.org/abstracts/search?q=rhetoric" title=" rhetoric"> rhetoric</a> </p> <a href="https://publications.waset.org/abstracts/50004/eisenhowers-farewell-speech-initial-and-continuing-communication-effects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50004.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">274</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28295</span> Annexation (Al-Iḍāfah) in Thariq bin Ziyad’s Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Annisa%20D.%20Febryandini">Annisa D. Febryandini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Annexation is a typical construction that commonly used in Arabic language. The use of the construction appears in Arabic speech such as the speech of Thariq bin Ziyad. The speech as one of the most famous speeches in the history of Islam uses many annexations. This qualitative research paper uses the secondary data by library method. Based on the data, this paper concludes that the speech has two basic structures with some variations and has some grammatical relationship. Different from the other researches that identify the speech in sociology field, the speech in this paper will be analyzed in linguistic field to take a look at the structure of its annexation as well as the grammatical relationship. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=annexation" title="annexation">annexation</a>, <a href="https://publications.waset.org/abstracts/search?q=Thariq%20bin%20Ziyad" title=" Thariq bin Ziyad"> Thariq bin Ziyad</a>, <a href="https://publications.waset.org/abstracts/search?q=grammatical%20relationship" title=" grammatical relationship"> grammatical relationship</a>, <a href="https://publications.waset.org/abstracts/search?q=Arabic%20syntax" title=" Arabic syntax"> Arabic syntax</a> </p> <a href="https://publications.waset.org/abstracts/72847/annexation-al-iafah-in-thariq-bin-ziyads-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72847.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28294</span> Blind Speech Separation Using SRP-PHAT Localization and Optimal Beamformer in Two-Speaker Environments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hai%20Quang%20Hong%20Dam">Hai Quang Hong Dam</a>, <a href="https://publications.waset.org/abstracts/search?q=Hai%20Ho"> Hai Ho</a>, <a href="https://publications.waset.org/abstracts/search?q=Minh%20Hoang%20Le%20Ngo"> Minh Hoang Le Ngo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates the problem of blind speech separation from the speech mixture of two speakers. A voice activity detector employing the Steered Response Power - Phase Transform (SRP-PHAT) is presented for detecting the activity information of speech sources and then the desired speech signals are extracted from the speech mixture by using an optimal beamformer. For evaluation, the algorithm effectiveness, a simulation using real speech recordings had been performed in a double-talk situation where two speakers are active all the time. Evaluations show that the proposed blind speech separation algorithm offers a good interference suppression level whilst maintaining a low distortion level of the desired signal. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20speech%20separation" title="blind speech separation">blind speech separation</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20activity%20detector" title=" voice activity detector"> voice activity detector</a>, <a href="https://publications.waset.org/abstracts/search?q=SRP-PHAT" title=" SRP-PHAT"> SRP-PHAT</a>, <a href="https://publications.waset.org/abstracts/search?q=optimal%20beamformer" title=" optimal beamformer"> optimal beamformer</a> </p> <a href="https://publications.waset.org/abstracts/53263/blind-speech-separation-using-srp-phat-localization-and-optimal-beamformer-in-two-speaker-environments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53263.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">283</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28293</span> A Cross-Dialect Statistical Analysis of Final Declarative Intonation in Tuvinian</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20Beziakina">D. Beziakina</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Bulgakova"> E. Bulgakova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study continues the research on Tuvinian intonation and presents a general cross-dialect analysis of intonation of Tuvinian declarative utterances, specifically the character of the tone movement in order to test the hypothesis about the prevalence of level tone in some Tuvinian dialects. The results of the analysis of basic pitch characteristics of Tuvinian speech (in general and in comparison with two other Turkic languages - Uzbek and Azerbaijani) are also given in this paper. The goal of our work was to obtain the ranges of pitch parameter values typical for Tuvinian speech. Such language-specific values can be used in speaker identification systems in order to get more accurate results of ethnic speech analysis. We also present the results of a cross-dialect analysis of declarative intonation in the poorly studied Tuvinian language. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20analysis" title="speech analysis">speech analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20analysis" title=" statistical analysis"> statistical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification%20of%20person" title=" identification of person"> identification of person</a> </p> <a href="https://publications.waset.org/abstracts/12497/a-cross-dialect-statistical-analysis-of-final-declarative-intonation-in-tuvinian" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12497.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28292</span> The Philippines’ War on Drugs: a Pragmatic Analysis on Duterte&#039;s Commemorative Speeches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ericson%20O.%20Alieto">Ericson O. Alieto</a>, <a href="https://publications.waset.org/abstracts/search?q=Aprillete%20C.%20Devanadera"> Aprillete C. Devanadera</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main objective of the study is to determine the dominant speech acts in five commemorative speeches of President Duterte. This study employed Speech Act Theory and Discourse analysis to determine how the speech acts features connote the pragmatic meaning of Duterte’s speeches. Identifying the speech acts is significant in elucidating the underlying message or the pragmatic meaning of the speeches. From the 713 sentences or utterances from the speeches, assertive with 208 occurrences from the corpus or 29% is the dominant speech acts. It was followed by expressive with 177 or 25% occurrences, directive accounts for 152 or 15% occurrences. While commisive accounts for 104 or 15% occurrences and declarative got the lowest percentage of occurrences with 72 or 10% only. These sentences when uttered by Duterte carry a certain power of language to move or influence people. Thus, the present study shows the fundamental message perceived by the listeners. Moreover, the frequent use of assertive and expressive not only explains the pragmatic message of the speeches but also reflects the personality of President Duterte. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=commemorative%20speech" title="commemorative speech">commemorative speech</a>, <a href="https://publications.waset.org/abstracts/search?q=discourse%20analysis" title=" discourse analysis"> discourse analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=duterte" title=" duterte"> duterte</a>, <a href="https://publications.waset.org/abstracts/search?q=pragmatics" title=" pragmatics"> pragmatics</a> </p> <a href="https://publications.waset.org/abstracts/89105/the-philippines-war-on-drugs-a-pragmatic-analysis-on-dutertes-commemorative-speeches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89105.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">287</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28291</span> Speech Acts and Politeness Strategies in an EFL Classroom in Georgia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tinatin%20Kurdghelashvili">Tinatin Kurdghelashvili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with the usage of speech acts and politeness strategies in an EFL classroom in Georgia (Rep of). It explores the students’ and the teachers’ practice of the politeness strategies and the speech acts of apology, thanking, request, compliment/encouragement, command, agreeing/disagreeing, addressing and code switching. The research method includes observation as well as a questionnaire. The target group involves the students from Georgian public schools and two certified, experienced local English teachers. The analysis is based on Searle’s Speech Act Theory and Brown and Levinson’s politeness strategies. The findings show that the students have certain knowledge regarding politeness yet they fail to apply them in English communication. In addition, most of the speech acts from the classroom interaction are used by the teachers and not the students. Thereby, it is suggested that teachers should cultivate the students’ communicative competence and attempt to give them opportunities to practice more English speech acts than they do today. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=english%20as%20a%20foreign%20language" title="english as a foreign language">english as a foreign language</a>, <a href="https://publications.waset.org/abstracts/search?q=Georgia" title=" Georgia"> Georgia</a>, <a href="https://publications.waset.org/abstracts/search?q=politeness%20principles" title=" politeness principles"> politeness principles</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20acts" title=" speech acts"> speech acts</a> </p> <a href="https://publications.waset.org/abstracts/17320/speech-acts-and-politeness-strategies-in-an-efl-classroom-in-georgia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17320.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">636</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28290</span> The Influence of Advertising Captions on the Internet through the Consumer Purchasing Decision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Suwimol%20Apapol">Suwimol Apapol</a>, <a href="https://publications.waset.org/abstracts/search?q=Punrapha%20Praditpong"> Punrapha Praditpong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objectives of the study were to find out the frequencies of figures of speech in fragrance advertising captions as well as the types of figures of speech most commonly applied in captions. The relation between figures of speech and fragrance was also examined in order to analyze how figures of speech were used to represent fragrance. Thirty-five fragrance advertisements were randomly selected from the Internet. Content analysis was applied in order to consider the relation between figures of speech and fragrance. The results showed that figures of speech were found in almost every fragrance advertisement except one advertisement of several Goods service. Thirty-four fragrance advertising captions used at least one kind of figure of speech. Metaphor was most frequently found and also most frequently applied in fragrance advertising captions, followed by alliteration, rhyme, simile and personification, and hyperbole respectively which is in harmony with the research hypotheses as well. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advertising%20captions" title="advertising captions">advertising captions</a>, <a href="https://publications.waset.org/abstracts/search?q=captions%20on%20internet" title=" captions on internet"> captions on internet</a>, <a href="https://publications.waset.org/abstracts/search?q=consumer%20purchasing%20decision" title=" consumer purchasing decision"> consumer purchasing decision</a>, <a href="https://publications.waset.org/abstracts/search?q=e-commerce" title=" e-commerce"> e-commerce</a> </p> <a href="https://publications.waset.org/abstracts/39966/the-influence-of-advertising-captions-on-the-internet-through-the-consumer-purchasing-decision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39966.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">270</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28289</span> An Algorithm Based on the Nonlinear Filter Generator for Speech Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Belmeguenai">A. Belmeguenai</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Mansouri"> K. Mansouri</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Djemili"> R. Djemili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work present a new algorithm based on the nonlinear filter generator for speech encryption and decryption. The proposed algorithm consists on the use a linear feedback shift register (LFSR) whose polynomial is primitive and nonlinear Boolean function. The purpose of this system is to construct Keystream with good statistical properties, but also easily computable on a machine with limited capacity calculated. This proposed speech encryption scheme is very simple, highly efficient, and fast to implement the speech encryption and decryption. We conclude the paper by showing that this system can resist certain known attacks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20filter%20generator" title="nonlinear filter generator">nonlinear filter generator</a>, <a href="https://publications.waset.org/abstracts/search?q=stream%20ciphers" title=" stream ciphers"> stream ciphers</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20encryption" title=" speech encryption"> speech encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20analysis" title=" security analysis"> security analysis</a> </p> <a href="https://publications.waset.org/abstracts/39095/an-algorithm-based-on-the-nonlinear-filter-generator-for-speech-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39095.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">296</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28288</span> Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Shoiynbek">A. Shoiynbek</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Kozhakhmet"> K. Kozhakhmet</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Menezes"> P. Menezes</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Kuanyshbay"> D. Kuanyshbay</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Bayazitov"> D. Bayazitov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech emotion recognition has received increasing research interest all through current years. There was used emotional speech that was collected under controlled conditions in most research work. Actors imitating and artificially producing emotions in front of a microphone noted those records. There are four issues related to that approach, namely, (1) emotions are not natural, and it means that machines are learning to recognize fake emotions. (2) Emotions are very limited by quantity and poor in their variety of speaking. (3) There is language dependency on SER. (4) Consequently, each time when researchers want to start work with SER, they need to find a good emotional database on their language. In this paper, we propose the approach to create an automatic tool for speech emotion extraction based on facial emotion recognition and describe the sequence of actions of the proposed approach. One of the first objectives of the sequence of actions is a speech detection issue. The paper gives a detailed description of the speech detection model based on a fully connected deep neural network for Kazakh and Russian languages. Despite the high results in speech detection for Kazakh and Russian, the described process is suitable for any language. To illustrate the working capacity of the developed model, we have performed an analysis of speech detection and extraction from real tasks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title="deep neural networks">deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20detection" title=" speech detection"> speech detection</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20emotion%20recognition" title=" speech emotion recognition"> speech emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Mel-frequency%20cepstrum%20coefficients" title=" Mel-frequency cepstrum coefficients"> Mel-frequency cepstrum coefficients</a>, <a href="https://publications.waset.org/abstracts/search?q=collecting%20speech%20emotion%20corpus" title=" collecting speech emotion corpus"> collecting speech emotion corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=collecting%20speech%20emotion%20dataset" title=" collecting speech emotion dataset"> collecting speech emotion dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset" title=" Kazakh speech dataset"> Kazakh speech dataset</a> </p> <a href="https://publications.waset.org/abstracts/152814/speech-detection-model-based-on-deep-neural-networks-classifier-for-speech-emotions-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152814.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28287</span> Speech Impact Realization via Manipulative Argumentation Techniques in Modern American Political Discourse</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zarine%20Avetisyan">Zarine Avetisyan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Paper presents the discussion of scholars concerning speech impact, peculiarities of its realization, speech strategies, and techniques. Departing from the viewpoints of many prominent linguists, the paper suggests manipulative argumentation be viewed as a most pervasive speech strategy with a certain set of techniques which are to be found in modern American political discourse. The precedence of their occurrence allows us to regard them as pragmatic patterns of speech impact realization in effective public speaking. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20impact" title="speech impact">speech impact</a>, <a href="https://publications.waset.org/abstracts/search?q=manipulative%20argumentation" title=" manipulative argumentation"> manipulative argumentation</a>, <a href="https://publications.waset.org/abstracts/search?q=political%20discourse" title=" political discourse"> political discourse</a>, <a href="https://publications.waset.org/abstracts/search?q=technique" title=" technique"> technique</a> </p> <a href="https://publications.waset.org/abstracts/31058/speech-impact-realization-via-manipulative-argumentation-techniques-in-modern-american-political-discourse" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31058.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">508</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28286</span> Speech Enhancement Using Kalman Filter in Communication</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eng.%20Alaa%20K.%20Satti%20Salih">Eng. Alaa K. Satti Salih</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Revolutions Applications such as telecommunications, hands-free communications, recording, etc. which need at least one microphone, the signal is usually infected by noise and echo. The important application is the speech enhancement, which is done to remove suppressed noises and echoes taken by a microphone, beside preferred speech. Accordingly, the microphone signal has to be cleaned using digital signal processing DSP tools before it is played out, transmitted, or stored. Engineers have so far tried different approaches to improving the speech by get back the desired speech signal from the noisy observations. Especially Mobile communication, so in this paper will do reconstruction of the speech signal, observed in additive background noise, using the Kalman filter technique to estimate the parameters of the Autoregressive Process (AR) in the state space model and the output speech signal obtained by the MATLAB. The accurate estimation by Kalman filter on speech would enhance and reduce the noise then compare and discuss the results between actual values and estimated values which produce the reconstructed signals. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autoregressive%20process" title="autoregressive process">autoregressive process</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20speech" title=" noise speech"> noise speech</a> </p> <a href="https://publications.waset.org/abstracts/7182/speech-enhancement-using-kalman-filter-in-communication" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7182.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">344</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28285</span> Systemic Functional Grammar Analysis of Barack Obama&#039;s Second Term Inaugural Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadiq%20Aminu">Sadiq Aminu</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Lamido"> Ahmed Lamido</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research studies Barack Obama’s second inaugural speech using Halliday’s Systemic Functional Grammar (SFG). SFG is a text grammar which describes how language is used, so that the meaning of the text can be better understood. The primary source of data in this research work is Barack Obama’s second inaugural speech which was obtained from the internet. The analysis of the speech was based on the ideational and textual metafunctions of Systemic Functional Grammar. Specifically, the researcher analyses the Process Types and Participants (ideational) and the Theme/Rheme (textual). It was found that material process (process of doing) was the most frequently used ‘Process type’ and ‘We’ which refers to the people of America was the frequently used ‘Theme’. Application of the SFG theory, therefore, gives a better meaning to Barack Obama’s speech. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ideational" title="ideational">ideational</a>, <a href="https://publications.waset.org/abstracts/search?q=metafunction" title=" metafunction"> metafunction</a>, <a href="https://publications.waset.org/abstracts/search?q=rheme" title=" rheme"> rheme</a>, <a href="https://publications.waset.org/abstracts/search?q=textual" title=" textual"> textual</a>, <a href="https://publications.waset.org/abstracts/search?q=theme" title=" theme"> theme</a> </p> <a href="https://publications.waset.org/abstracts/89617/systemic-functional-grammar-analysis-of-barack-obamas-second-term-inaugural-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89617.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28284</span> Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aisultan%20Shoiynbek">Aisultan Shoiynbek</a>, <a href="https://publications.waset.org/abstracts/search?q=Darkhan%20Kuanyshbay"> Darkhan Kuanyshbay</a>, <a href="https://publications.waset.org/abstracts/search?q=Paulo%20Menezes"> Paulo Menezes</a>, <a href="https://publications.waset.org/abstracts/search?q=Akbayan%20Bekarystankyzy"> Akbayan Bekarystankyzy</a>, <a href="https://publications.waset.org/abstracts/search?q=Assylbek%20Mukhametzhanov"> Assylbek Mukhametzhanov</a>, <a href="https://publications.waset.org/abstracts/search?q=Temirlan%20Shoiynbek"> Temirlan Shoiynbek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech emotion recognition (SER) has received increasing research interest in recent years. It is a common practice to utilize emotional speech collected under controlled conditions recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues related to that approach: emotions are not natural, meaning that machines are learning to recognize fake emotions; emotions are very limited in quantity and poor in variety of speaking; there is some language dependency in SER; consequently, each time researchers want to start work with SER, they need to find a good emotional database in their language. This paper proposes an approach to create an automatic tool for speech emotion extraction based on facial emotion recognition and describes the sequence of actions involved in the proposed approach. One of the first objectives in the sequence of actions is the speech detection issue. The paper provides a detailed description of the speech detection model based on a fully connected deep neural network for Kazakh and Russian. Despite the high results in speech detection for Kazakh and Russian, the described process is suitable for any language. To investigate the working capacity of the developed model, an analysis of speech detection and extraction from real tasks has been performed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title="deep neural networks">deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20detection" title=" speech detection"> speech detection</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20emotion%20recognition" title=" speech emotion recognition"> speech emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Mel-frequency%20cepstrum%20coefficients" title=" Mel-frequency cepstrum coefficients"> Mel-frequency cepstrum coefficients</a>, <a href="https://publications.waset.org/abstracts/search?q=collecting%20speech%20emotion%20corpus" title=" collecting speech emotion corpus"> collecting speech emotion corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=collecting%20speech%20emotion%20dataset" title=" collecting speech emotion dataset"> collecting speech emotion dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset" title=" Kazakh speech dataset"> Kazakh speech dataset</a> </p> <a href="https://publications.waset.org/abstracts/189328/speech-detection-model-based-on-deep-neural-networks-classifier-for-speech-emotions-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/189328.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">26</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28283</span> Comparative Methods for Speech Enhancement and the Effects on Text-Independent Speaker Identification Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Ajgou">R. Ajgou</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Sbaa"> S. Sbaa</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ghendir"> S. Ghendir</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Chemsa"> A. Chemsa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Taleb-Ahmed"> A. Taleb-Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The speech enhancement algorithm is to improve speech quality. In this paper, we review some speech enhancement methods and we evaluated their performance based on Perceptual Evaluation of Speech Quality scores (PESQ, ITU-T P.862). All method was evaluated in presence of different kind of noise using TIMIT database and NOIZEUS noisy speech corpus.. The noise was taken from the AURORA database and includes suburban train noise, babble, car, exhibition hall, restaurant, street, airport and train station noise. Simulation results showed improved performance of speech enhancement for Tracking of non-stationary noise approach in comparison with various methods in terms of PESQ measure. Moreover, we have evaluated the effects of the speech enhancement technique on Speaker Identification system based on autoregressive (AR) model and Mel-frequency Cepstral coefficients (MFCC). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20enhancement" title="speech enhancement">speech enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=pesq" title=" pesq"> pesq</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a> </p> <a href="https://publications.waset.org/abstracts/31102/comparative-methods-for-speech-enhancement-and-the-effects-on-text-independent-speaker-identification-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31102.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">424</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28282</span> Speech Enhancement Using Wavelet Coefficients Masking with Local Binary Patterns</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Christian%20Arcos">Christian Arcos</a>, <a href="https://publications.waset.org/abstracts/search?q=Marley%20Vellasco"> Marley Vellasco</a>, <a href="https://publications.waset.org/abstracts/search?q=Abraham%20Alcaim"> Abraham Alcaim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a wavelet coefficients masking based on Local Binary Patterns (WLBP) approach to enhance the temporal spectra of the wavelet coefficients for speech enhancement. This technique exploits the wavelet denoising scheme, which splits the degraded speech into pyramidal subband components and extracts frequency information without losing temporal information. Speech enhancement in each high-frequency subband is performed by binary labels through the local binary pattern masking that encodes the ratio between the original value of each coefficient and the values of the neighbour coefficients. This approach enhances the high-frequency spectra of the wavelet transform instead of eliminating them through a threshold. A comparative analysis is carried out with conventional speech enhancement algorithms, demonstrating that the proposed technique achieves significant improvements in terms of PESQ, an international recommendation of objective measure for estimating subjective speech quality. Informal listening tests also show that the proposed method in an acoustic context improves the quality of speech, avoiding the annoying musical noise present in other speech enhancement techniques. Experimental results obtained with a DNN based speech recognizer in noisy environments corroborate the superiority of the proposed scheme in the robust speech recognition scenario. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=binary%20labels" title="binary labels">binary labels</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20binary%20patterns" title=" local binary patterns"> local binary patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=mask" title=" mask"> mask</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20coefficients" title=" wavelet coefficients"> wavelet coefficients</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20enhancement" title=" speech enhancement"> speech enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a> </p> <a href="https://publications.waset.org/abstracts/79985/speech-enhancement-using-wavelet-coefficients-masking-with-local-binary-patterns" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79985.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">229</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28281</span> Freedom of Speech and Involvement in Hatred Speech on Social Media Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sara%20Chinnasamy">Sara Chinnasamy</a>, <a href="https://publications.waset.org/abstracts/search?q=Michelle%20Gun"> Michelle Gun</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Adnan%20Hashim"> M. Adnan Hashim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Federal Constitution guarantees Malaysians the right to free speech and expression; yet hatred speech can be commonly found on social media platforms such as Facebook, Twitter, and Instagram. In Malaysia social media sphere, most hatred speech involves religion, race and politics. Recent cases of racial attacks on social media have created social tensions among Malaysians. Many Malaysians always argue on their rights to freedom of speech. However, there are laws that limit their expression to the public and protecting social media users from being a victim of hate speech. This paper aims to explore the attitude and involvement of Malaysian netizens towards freedom of speech and hatred speech on social media. It also examines the relationship between involvement in hatred speech among Malaysian netizens and attitude towards freedom of speech. For most Malaysians, practicing total freedom of speech in the open is unthinkable. As a result, the best channel to articulate their feelings and opinions liberally is the internet. With the advent of the internet medium, more and more Malaysians are conveying their viewpoints using the various internet channels although sensitivity of the audience is seldom taken into account. Consequently, this situation has led to pockets of social disharmony among the citizens. Although this unhealthy activity is denounced by the authority, netizens are generally of the view that they have the right to write anything they want. Using the quantitative method, survey was conducted among Malaysians aged between 18 and 50 years who are active social media users. 
28280. Minimum Data of a Speech Signal as Special Indicators of Identification in Phonoscopy
Authors: Nazaket Gazieva
Abstract: Voice biometric data, associated with physiological, psychological, and other factors, are widely used in forensic phonoscopy, and there are various methods for identifying and verifying a person by voice. This article explores minimum speech-signal data as individual parameters of the speech signal. Monozygotic twins are believed to be genetically identical; using the minimum data of the speech signal, we found that the voice print of each monozygotic twin is nevertheless individual. From the experiments, we conclude that these minimum indicators of the speech signal are more stable and reliable for phonoscopic examination.
Keywords: phonogram, speech signal, temporal characteristics, fundamental frequency, biometric fingerprints
PDF: https://publications.waset.org/abstracts/110332.pdf | Downloads: 144
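Fundamental frequency and temporal characteristics appear among the keywords; a common way to extract such F0 statistics from a phonogram is the pYIN tracker in librosa, sketched below. The tool choice, file name, and pitch bounds are assumptions, not the author's procedure.

```python
import numpy as np
import librosa

# Hypothetical phonogram; pYIN returns NaN for unvoiced frames.
y, sr = librosa.load("speaker_sample.wav", sr=16000)
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"), sr=sr)
f0_voiced = f0[voiced]                     # keep voiced frames only
print("mean F0: %.1f Hz, std: %.1f Hz"
      % (np.nanmean(f0_voiced), np.nanstd(f0_voiced)))
```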
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=phonogram" title="phonogram">phonogram</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20signal" title=" speech signal"> speech signal</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20characteristics" title=" temporal characteristics"> temporal characteristics</a>, <a href="https://publications.waset.org/abstracts/search?q=fundamental%20frequency" title=" fundamental frequency"> fundamental frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=biometric%20fingerprints" title=" biometric fingerprints"> biometric fingerprints</a> </p> <a href="https://publications.waset.org/abstracts/110332/minimum-data-of-a-speech-signal-as-special-indicators-of-identification-in-phonoscopy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28279</span> Intervention of Self-Limiting L1 Inner Speech during L2 Presentations: A Study of Bangla-English Bilinguals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdul%20Wahid">Abdul Wahid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Inner speech, also known as verbal thinking, self-talk or private speech, is characterized by the subjective language experience in the absence of overt or audible speech. It is a psychological form of verbal activity which is being rehearsed without the articulation of any sound wave. In Psychology, self-limiting speech means the type of speech which contains information that inhibits the development of the self. People, in most cases, experience inner speech in their first language. It is very frequent in Bangladesh where the Bangla (L1) speaking students lose track of speech during their presentations in English (L2). This paper investigates into the long pauses (more than 0.4 seconds long) in English (L2) presentations by Bangla speaking students (18-21 year old) and finds the intervention of Bangla (L1) inner speech as one of its causes. The overt speeches of the presenters are placed on Audacity Audio Editing software where the length of pauses are measured in milliseconds. Varieties of inner speech questionnaire (VISQ) have been conducted randomly amongst the participants out of whom 20 were selected who have similar phenomenology of inner speech. They have been interviewed to describe the type and content of the voices that went on in their head during the long pauses. The qualitative interview data are then codified and converted into quantitative data. It was observed that in more than 80% cases students experience self-limiting inner speech/self-talk during their unwanted pauses in L2 presentations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bangla-English%20Bilinguals" title="Bangla-English Bilinguals">Bangla-English Bilinguals</a>, <a href="https://publications.waset.org/abstracts/search?q=inner%20speech" title=" inner speech"> inner speech</a>, <a href="https://publications.waset.org/abstracts/search?q=L1%20intervention%20in%20bilingualism" title=" L1 intervention in bilingualism"> L1 intervention in bilingualism</a>, <a href="https://publications.waset.org/abstracts/search?q=motor%20schema" title=" motor schema"> motor schema</a>, <a href="https://publications.waset.org/abstracts/search?q=pauses" title=" pauses"> pauses</a>, <a href="https://publications.waset.org/abstracts/search?q=phonological%20loop" title=" phonological loop"> phonological loop</a>, <a href="https://publications.waset.org/abstracts/search?q=phonological%20store" title=" phonological store"> phonological store</a>, <a href="https://publications.waset.org/abstracts/search?q=working%20memory" title=" working memory"> working memory</a> </p> <a href="https://publications.waset.org/abstracts/128980/intervention-of-self-limiting-l1-inner-speech-during-l2-presentations-a-study-of-bangla-english-bilinguals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128980.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">152</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28278</span> Compensatory Articulation of Pressure Consonants in Telugu Cleft Palate Speech: A Spectrographic Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Indira%20Kothalanka">Indira Kothalanka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For individuals born with a cleft palate (CP), there is no separation between the nasal cavity and the oral cavity, due to which they cannot build up enough air pressure in the mouth for speech. Therefore, it is common for them to have speech problems. Common cleft type speech errors include abnormal articulation (compensatory or obligatory) and abnormal resonance (hyper, hypo and mixed nasality). These are generally resolved after palate repair. However, in some individuals, articulation problems do persist even after the palate repair. Such individuals develop variant articulations in an attempt to compensate for the inability to produce the target phonemes. A spectrographic analysis is used to investigate the compensatory articulatory behaviours of pressure consonants in the speech of 10 Telugu speaking individuals aged between 7-17 years with a history of cleft palate. Telugu is a Dravidian language which is spoken in Andhra Pradesh and Telangana states in India. It is a language with the third largest number of native speakers in India and the most spoken Dravidian language. The speech of the informants is analysed using single word list, sentences, passage and conversation. Spectrographic analysis is carried out using PRAAT, speech analysis software. The place and manner of articulation of consonant sounds is studied through spectrograms with the help of various acoustic cues. 
28277. The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition
Authors: Fawaz S. Al-Anzi, Dia AbuZeina
Abstract: Speech recognition makes an important contribution to new technologies for human-computer interaction, and there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires several stages before the desired output is obtained. Among the components of an automatic speech recognition (ASR) system is the feature extraction process, which parameterizes the speech signal to produce feature vectors that approximate the linguistic content conveyed by the input signal. There are several methods for extracting speech features, of which Mel Frequency Cepstral Coefficients (MFCC) are the most popular: it has long been observed that MFCCs dominate in well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice for identifying the different speech segments and obtaining the language phonemes for subsequent training and decoding steps. Given the good performance of MFCC, previous studies show that it also dominates Arabic ASR research. In this paper, we demonstrate MFCC extraction, as well as the intermediate steps performed to obtain the coefficients, using the HTK toolkit.
Keywords: speech recognition, acoustic features, mel frequency, cepstral coefficients
PDF: https://publications.waset.org/abstracts/78382.pdf | Downloads: 259
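The paper demonstrates the coefficients with HTK; an equivalent extraction is a one-liner in librosa, shown below with the usual HTK-style framing (13 coefficients, 25 ms windows, 10 ms hop), which is an assumption rather than the paper's exact configuration.

```python
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)   # hypothetical utterance
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=int(0.025 * sr),       # 25 ms window
                            hop_length=int(0.010 * sr))  # 10 ms hop
print(mfcc.shape)   # (13, n_frames): one 13-dim feature vector per frame
```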
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title="speech recognition">speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic%20features" title=" acoustic features"> acoustic features</a>, <a href="https://publications.waset.org/abstracts/search?q=mel%20frequency" title=" mel frequency"> mel frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=cepstral%20coefficients" title=" cepstral coefficients"> cepstral coefficients</a> </p> <a href="https://publications.waset.org/abstracts/78382/the-capacity-of-mel-frequency-cepstral-coefficients-for-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28276</span> A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qianhua%20He">Qianhua He</a>, <a href="https://publications.waset.org/abstracts/search?q=Weili%20Zhou"> Weili Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Aiwu%20Chen"> Aiwu Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A sparse representation speech denoising method based on adapted stopping residue error was presented in this paper. Firstly, the cross-correlation between the clean speech spectrum and the noise spectrum was analyzed, and an estimation method was proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum was learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error was adaptively achieved according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach was applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech was re-synthesised via the inverse Fourier transform with the reconstructed speech spectrum and the noisy speech phase. The experiment results show that the proposed method outperforms the conventional methods in terms of subjective and objective measure. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20denoising" title="speech denoising">speech denoising</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a>, <a href="https://publications.waset.org/abstracts/search?q=k-singular%20value%20decomposition" title=" k-singular value decomposition"> k-singular value decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=orthogonal%20matching%20pursuit" title=" orthogonal matching pursuit"> orthogonal matching pursuit</a> </p> <a href="https://publications.waset.org/abstracts/66670/a-sparse-representation-speech-denoising-method-based-on-adapted-stopping-residue-error" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66670.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">499</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28275</span> Status of Communication and Swallowing Therapy in Patient with a Tracheostomy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ya-Hui%20Wang">Ya-Hui Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Lower speech therapy rate of tracheostomized patient was noted in comparison with previous researches. This study is aim to shed light on the referral status of speech therapy in those patients in Taiwan. This study developed an analysis for the size and key characteristics of the population of tracheostomized in-patient in the Taiwan. Method: We analyzed National Healthcare Insurance data (The Collaboration Center of Health Information Application, CCHIA) from Jan 1 2010 to Dec 31 2010. Result: over ages 3, number of tracheostomized in-patient is directly proportional to age. A high service loading was observed in North region in comparison with other regions. Only 4.87% of the tracheostomized in-patients were referred for speech therapy, and 1.9% for swallow examination, 2.5% for communication evaluation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=refer" title="refer">refer</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20therapy" title=" speech therapy"> speech therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=training" title=" training"> training</a>, <a href="https://publications.waset.org/abstracts/search?q=rehabilitation" title=" rehabilitation"> rehabilitation</a> </p> <a href="https://publications.waset.org/abstracts/13231/status-of-communication-and-swallowing-therapy-in-patient-with-a-tracheostomy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13231.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">440</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=943">943</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=944">944</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speech%20analysis&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> 
</div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
