<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: delimitation of speech</title> <meta name="description" content="Search results for: delimitation of speech"> <meta name="keywords" content="delimitation of speech"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="delimitation of speech" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="delimitation of speech"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 803</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: delimitation of speech</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">803</span> Articles, Delimitation of Speech and Perception</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nataliya%20L.%20Ogurechnikova">Nataliya L. Ogurechnikova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper aims to clarify the function of articles in the English speech and specify their place and role in the English language, taking into account the use of articles for delimitation of speech. 
A focus of the paper is the use of the definite and the indefinite articles with different types of noun phrases, which comprise either one noun with or without attributes, such as the King, the Queen, the Lion, the Unicorn, a dimple, a smile, a new language, an unknown dialect, or several nouns with or without attributes, such as the King and Queen of Hearts, the Lion and Unicorn, a dimple or smile, a completely isolated language or dialect. It is stated that the function of delimitation is related to perception: the number of speech units in a text correlates with the way the speaker perceives and segments the denotation. The two combinations of words ‘the house and garden’ and ‘the house and the garden’ contain different numbers of speech units, one and two respectively, and reveal two different perception modes, which correspond to the use of the definite article in the examples given. Thus, the function of delimitation is twofold: it is related to perception and cognition, on the one hand, and, on the other hand, to grammar, if the subject of grammar is the structure of speech. The analysis of speech units in the paper is not limited to noun phrases and is amplified by a discussion of peripheral phenomena, which are nevertheless important because they make it possible to qualify articles as a syntactic phenomenon, whereas they are not infrequently described in terms of noun morphology. In this regard, attention is given to the history of linguistic studies, specifically to the description of English articles by Niels Haislund, a disciple of Otto Jespersen. A discrepancy is noted between the initial plan of Jespersen, who intended to describe articles as a syntactic phenomenon in ‘A Modern English Grammar on Historical Principles’, and the interpretation of articles in terms of noun morphology finally given by Haislund. Another issue of the paper is the correlation between description and denotation, a traditional aspect of linguistic studies focused on articles. 
An overview of relevant studies, given in the paper, goes back to the works of G. Frege, which gave rise to a series of scientific works in which the meaning of articles was described within the scope of logical semantics. The correlation between denotation and description is treated in the paper as the meaning of the article, i.e. a component of its semantic structure, which differs from the function of delimitation and is similar to the meaning of other quantifiers. The paper further explains why the relation between description and denotation, i.e. the meaning of the English article, is irrelevant for noun morphology and has nothing to do with the nominal categories of the English language. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech" title="delimitation of speech">delimitation of speech</a>, <a href="https://publications.waset.org/abstracts/search?q=denotation" title=" denotation"> denotation</a>, <a href="https://publications.waset.org/abstracts/search?q=description" title=" description"> description</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20units" title=" speech units"> speech units</a>, <a href="https://publications.waset.org/abstracts/search?q=syntax" title=" syntax"> syntax</a> </p> <a href="https://publications.waset.org/abstracts/68100/articles-delimitation-of-speech-and-perception" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68100.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">241</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">802</span> Robust Noisy Speech Identification Using 
Frame Classifier Derived Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Punnoose%20A.%20K.">Punnoose A. K.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an approach for identifying noisy speech recordings using a multi-layer perceptron (MLP) trained to predict phonemes from acoustic features. Characteristics of the MLP posteriors are explored for clean speech and noisy speech at the frame level. Appropriate density functions are used to fit the softmax probabilities of clean and noisy speech. A function that takes into account the ratio of the softmax probability density of noisy speech to that of clean speech is formulated. This phoneme-independent score is weighted using phoneme-specific weights to make the scoring more robust. Simple thresholding is used to separate noisy speech recordings from clean speech recordings. The approach is benchmarked on standard databases, with a focus on precision. 
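The scoring pipeline sketched in this abstract can be illustrated as follows. This is a hypothetical reconstruction, not the authors' implementation: the Gaussian densities, the per-phoneme weights, and the zero threshold are illustrative stand-ins for the fitted densities and learned weightage the abstract describes.

```python
import numpy as np

def frame_scores(posteriors, phonemes, clean_pdf, noisy_pdf, weights):
    """Per-frame score: log-ratio of the noisy-speech density to the
    clean-speech density of the winning softmax probability, weighted
    by a phoneme-specific reliability weight."""
    p = np.asarray(posteriors, dtype=float)
    ratio = np.log(noisy_pdf(p) / clean_pdf(p))
    w = np.array([weights[ph] for ph in phonemes])
    return w * ratio

def is_noisy(posteriors, phonemes, clean_pdf, noisy_pdf, weights, threshold=0.0):
    """Flag a recording as noisy when the mean weighted score crosses a threshold."""
    scores = frame_scores(posteriors, phonemes, clean_pdf, noisy_pdf, weights)
    return bool(scores.mean() > threshold)

# Illustrative (unnormalized) densities: clean frames tend to produce confident
# softmax maxima near 1.0, noisy frames flatter maxima near 0.5.
clean_pdf = lambda p: np.exp(-((p - 0.9) ** 2) / (2 * 0.05 ** 2))
noisy_pdf = lambda p: np.exp(-((p - 0.5) ** 2) / (2 * 0.2 ** 2))
weights = {"AA": 1.0, "S": 0.5}  # hypothetical per-phoneme weightage

confident = [0.92, 0.88, 0.90]  # frames that look clean
flat = [0.45, 0.50, 0.55]       # frames that look noisy
phones = ["AA", "AA", "S"]
print(is_noisy(confident, phones, clean_pdf, noisy_pdf, weights))  # False
print(is_noisy(flat, phones, clean_pdf, noisy_pdf, weights))       # True
```

In practice the two densities would be fitted to held-out clean and noisy frames, and the weights learned per phoneme, rather than hand-set as here.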
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=noisy%20speech%20identification" title="noisy speech identification">noisy speech identification</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20pre-processing" title=" speech pre-processing"> speech pre-processing</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20robustness" title=" noise robustness"> noise robustness</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20engineering" title=" feature engineering"> feature engineering</a> </p> <a href="https://publications.waset.org/abstracts/144694/robust-noisy-speech-identification-using-frame-classifier-derived-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144694.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">801</span> An Analysis of Illocutionary Act in Martin Luther King Jr.&#039;s Propaganda Speech Entitled &#039;I Have a Dream&#039;</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahgfirah%20Firdaus%20Soberatta">Mahgfirah Firdaus Soberatta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Language cannot be separated from human life. Humans use language to convey ideas, thoughts, and feelings. We can use words to do different things, for example asserting, advising, promising, giving opinions, expressing hopes, etc. Propaganda is an attempt to obtain stable behavior and to adapt it to everyone's everyday life. It also permanently controls the thoughts and attitudes of individuals in social settings. 
In this research, the writer discusses the speech acts in a propaganda speech delivered by Martin Luther King Jr. at the Lincoln Memorial in Washington on August 28, 1963. 'I Have a Dream' is a public speech delivered by the American civil rights activist, in which he calls for an end to racism in the USA. The writer uses Searle's theory to analyze the types of illocutionary speech acts used by Martin Luther King Jr. in his propaganda speech. The research follows a descriptive qualitative method, because it aims to describe and explain the types of illocutionary speech acts used in the speech. The findings indicate that there are five types of speech acts in Martin Luther King Jr.'s speech. MLK also used direct speech and indirect speech in his propaganda speech; however, direct speech is the dominant type. It is hoped that this research is useful for readers seeking to enrich their knowledge of pragmatics, particularly speech acts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20act" title="speech act">speech act</a>, <a href="https://publications.waset.org/abstracts/search?q=propaganda" title=" propaganda"> propaganda</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20Luther%20King%20Jr." 
title=" Martin Luther King Jr."> Martin Luther King Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=speech" title=" speech"> speech</a> </p> <a href="https://publications.waset.org/abstracts/45649/an-analysis-of-illocutioary-act-in-martin-luther-king-jrs-propaganda-speech-entitled-i-have-a-dream" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45649.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">441</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">800</span> The Online Advertising Speech that Effect to the Thailand Internet User Decision Making</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Panprae%20Bunyapukkna">Panprae Bunyapukkna</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigated figures of speech used in fragrance advertising captions on the Internet. The objectives of the study were to find out the frequencies of figures of speech in fragrance advertising captions and the types of figures of speech most commonly applied in captions. The relation between figures of speech and fragrance was also examined in order to analyze how figures of speech were used to represent fragrance. Thirty-five fragrance advertisements were randomly selected from the Internet. Content analysis was applied in order to consider the relation between figures of speech and fragrance. The results showed that figures of speech were found in almost every fragrance advertisement except one advertisement of Lancôme. Thirty-four fragrance advertising captions used at least one kind of figure of speech. 
Metaphor was most frequently found and also most frequently applied in fragrance advertising captions, followed by alliteration, rhyme, simile and personification, and hyperbole respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advertising%20speech" title="advertising speech">advertising speech</a>, <a href="https://publications.waset.org/abstracts/search?q=fragrance%20advertisements" title=" fragrance advertisements"> fragrance advertisements</a>, <a href="https://publications.waset.org/abstracts/search?q=figures%20of%20speech" title=" figures of speech"> figures of speech</a>, <a href="https://publications.waset.org/abstracts/search?q=metaphor" title=" metaphor"> metaphor</a> </p> <a href="https://publications.waset.org/abstracts/44259/the-online-advertising-speech-that-effect-to-the-thailand-internet-user-decision-making" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44259.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">241</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">799</span> TeleMe Speech Booster: Web-Based Speech Therapy and Training Program for Children with Articulation Disorders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20Treerattanaphan">C. Treerattanaphan</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Boonpramuk"> P. Boonpramuk</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Singla"> P. 
Singla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Frequent, continuous speech training has proven to be a necessary part of a successful speech therapy process, but constraints of traveling time and employment dispensation become key obstacles, especially for individuals living in remote areas or for dependent children who have working parents. In order to ameliorate speech difficulties with ample guidance from speech therapists, a website has been developed that supports speech therapy and training for people with articulation disorders in the standard Thai language. This web-based program has the ability to record speech training exercises for each speech trainee. The records will be stored in a database for the speech therapist to investigate, evaluate, compare and keep track of all trainees’ progress in detail. Speech trainees can request live discussions via video conference call when needed. Communication through this web-based program facilitates and reduces training time in comparison to walk-in training or appointments. This type of training also allows people with articulation disorders to practice speech lessons whenever and wherever it is convenient for them, which can lead to a more regular training process. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=web-based%20remote%20training%20program" title="web-based remote training program">web-based remote training program</a>, <a href="https://publications.waset.org/abstracts/search?q=Thai%20speech%20therapy" title=" Thai speech therapy"> Thai speech therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=articulation%20disorders" title=" articulation disorders"> articulation disorders</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20booster" title=" speech booster"> speech booster</a> </p> <a href="https://publications.waset.org/abstracts/13916/teleme-speech-booster-web-based-speech-therapy-and-training-program-for-children-with-articulation-disorders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13916.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">798</span> Development of Non-Intrusive Speech Evaluation Measure Using S-Transform and Light-Gbm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tusar%20Kanti%20Dash">Tusar Kanti Dash</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganapati%20Panda"> Ganapati Panda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The evaluation of speech quality and intelligibility is critical to the overall effectiveness of speech enhancement algorithms. Several intrusive and non-intrusive measures are employed to calculate these parameters. Non-intrusive evaluation is the most challenging, as, very often, the reference clean speech data is not available. 
In this paper, a novel non-intrusive speech evaluation measure is proposed using audio features derived from the Stockwell transform. These features are used with the Light Gradient Boosting Machine for the effective prediction of speech quality and intelligibility. The proposed model is analyzed using noisy and reverberant speech from four databases, and the results are compared with the standard intrusive evaluation measures. It is observed from the comparative analysis that the proposed model performs better than the standard non-intrusive models. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=non-Intrusive%20speech%20evaluation" title="non-Intrusive speech evaluation">non-Intrusive speech evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=S-transform" title=" S-transform"> S-transform</a>, <a href="https://publications.waset.org/abstracts/search?q=light%20GBM" title=" light GBM"> light GBM</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20quality" title=" speech quality"> speech quality</a>, <a href="https://publications.waset.org/abstracts/search?q=and%20intelligibility" title=" and intelligibility"> and intelligibility</a> </p> <a href="https://publications.waset.org/abstracts/139626/development-of-non-intrusive-speech-evaluation-measure-using-s-transform-and-light-gbm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139626.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">260</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">797</span> Annexation (Al-Iḍāfah) in Thariq bin Ziyad’s Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Annisa%20D.%20Febryandini">Annisa D. Febryandini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Annexation is a typical construction that is commonly used in the Arabic language. The construction appears in Arabic speeches such as the speech of Thariq bin Ziyad. The speech, one of the most famous in the history of Islam, uses many annexations. This qualitative research paper uses secondary data collected through the library method. Based on the data, this paper concludes that the speech has two basic structures with some variations and certain grammatical relationships. Unlike other studies, which examine the speech from a sociological perspective, this paper analyzes it linguistically, looking at the structure of its annexations as well as their grammatical relationships. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=annexation" title="annexation">annexation</a>, <a href="https://publications.waset.org/abstracts/search?q=Thariq%20bin%20Ziyad" title=" Thariq bin Ziyad"> Thariq bin Ziyad</a>, <a href="https://publications.waset.org/abstracts/search?q=grammatical%20relationship" title=" grammatical relationship"> grammatical relationship</a>, <a href="https://publications.waset.org/abstracts/search?q=Arabic%20syntax" title=" Arabic syntax"> Arabic syntax</a> </p> <a href="https://publications.waset.org/abstracts/72847/annexation-al-iafah-in-thariq-bin-ziyads-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72847.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">796</span> Blind Speech Separation 
Using SRP-PHAT Localization and Optimal Beamformer in Two-Speaker Environments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hai%20Quang%20Hong%20Dam">Hai Quang Hong Dam</a>, <a href="https://publications.waset.org/abstracts/search?q=Hai%20Ho"> Hai Ho</a>, <a href="https://publications.waset.org/abstracts/search?q=Minh%20Hoang%20Le%20Ngo"> Minh Hoang Le Ngo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates the problem of blind speech separation from a speech mixture of two speakers. A voice activity detector employing the Steered Response Power - Phase Transform (SRP-PHAT) is presented for detecting the activity of the speech sources, and the desired speech signals are then extracted from the speech mixture by using an optimal beamformer. To evaluate the effectiveness of the algorithm, a simulation using real speech recordings was performed in a double-talk situation where both speakers are active all the time. Evaluations show that the proposed blind speech separation algorithm offers a good interference suppression level whilst maintaining a low distortion level of the desired signal. 
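SRP-PHAT localization rests on the generalized cross-correlation with phase transform (GCC-PHAT) between microphone pairs: whitening the cross-spectrum discards magnitude and keeps only phase, i.e. time-delay, information, and the steered response power at a candidate location sums these pair-wise correlations at the corresponding delays. As a minimal sketch (not the authors' code), the following estimates the inter-microphone delay for a single pair; the sampling rate and signals are toy values.

```python
import numpy as np

def gcc_phat(x1, x2, fs, max_tau=None):
    """Estimate the delay of x2 relative to x1 via GCC-PHAT.
    A positive result means the signal reaches microphone 2 later."""
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    cross = X2 * np.conj(X1)
    cross /= np.abs(cross) + 1e-12            # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    # Reorder so negative lags precede positive lags, then pick the peak.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs, cc

# Two-microphone toy example: the same noise burst arrives 5 samples later at mic 2.
fs = 8000
rng = np.random.default_rng(0)
s = rng.standard_normal(2048)
delay = 5
x1 = s
x2 = np.concatenate((np.zeros(delay), s[:-delay]))
tau, _ = gcc_phat(x1, x2, fs, max_tau=0.01)
print(round(tau * fs))  # delay estimate in samples (≈ 5)
```

A full SRP-PHAT search would repeat this over all microphone pairs and candidate source positions, picking the position whose implied delays maximize the summed correlation.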
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20speech%20separation" title="blind speech separation">blind speech separation</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20activity%20detector" title=" voice activity detector"> voice activity detector</a>, <a href="https://publications.waset.org/abstracts/search?q=SRP-PHAT" title=" SRP-PHAT"> SRP-PHAT</a>, <a href="https://publications.waset.org/abstracts/search?q=optimal%20beamformer" title=" optimal beamformer"> optimal beamformer</a> </p> <a href="https://publications.waset.org/abstracts/53263/blind-speech-separation-using-srp-phat-localization-and-optimal-beamformer-in-two-speaker-environments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53263.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">283</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">795</span> Speech Impact Realization via Manipulative Argumentation Techniques in Modern American Political Discourse</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zarine%20Avetisyan">Zarine Avetisyan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper presents the discussion among scholars concerning speech impact, the peculiarities of its realization, and speech strategies and techniques. Departing from the viewpoints of many prominent linguists, the paper suggests that manipulative argumentation be viewed as one of the most pervasive speech strategies, with a certain set of techniques that are to be found in modern American political discourse. 
The prevalence of their occurrence allows us to regard them as pragmatic patterns of speech impact realization in effective public speaking. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20impact" title="speech impact">speech impact</a>, <a href="https://publications.waset.org/abstracts/search?q=manipulative%20argumentation" title=" manipulative argumentation"> manipulative argumentation</a>, <a href="https://publications.waset.org/abstracts/search?q=political%20discourse" title=" political discourse"> political discourse</a>, <a href="https://publications.waset.org/abstracts/search?q=technique" title=" technique"> technique</a> </p> <a href="https://publications.waset.org/abstracts/31058/speech-impact-realization-via-manipulative-argumentation-techniques-in-modern-american-political-discourse" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31058.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">509</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">794</span> Speech Enhancement Using Kalman Filter in Communication</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eng.%20Alaa%20K.%20Satti%20Salih">Eng. Alaa K. Satti Salih</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Applications such as telecommunications, hands-free communication, recording, etc., which need at least one microphone, produce a signal that is usually corrupted by noise and echo. An important application is speech enhancement, which is done to remove unwanted noise and echo picked up by a microphone alongside the desired speech. 
Accordingly, the microphone signal has to be cleaned using digital signal processing (DSP) tools before it is played out, transmitted, or stored. Engineers have so far tried different approaches to improving the speech by recovering the desired speech signal from the noisy observations. This is especially relevant in mobile communication, so this paper reconstructs the speech signal, observed in additive background noise, using the Kalman filter technique to estimate the parameters of the autoregressive (AR) process in the state-space model; the output speech signal is obtained in MATLAB. Accurate estimation of the speech by the Kalman filter enhances the signal and reduces the noise; the actual and estimated values, which produce the reconstructed signals, are then compared and discussed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autoregressive%20process" title="autoregressive process">autoregressive process</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20speech" title=" noise speech"> noise speech</a> </p> <a href="https://publications.waset.org/abstracts/7182/speech-enhancement-using-kalman-filter-in-communication" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7182.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">344</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">793</span> Comparative Methods for Speech Enhancement and the Effects on Text-Independent Speaker Identification Performance</h5> <div 
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Ajgou">R. Ajgou</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Sbaa"> S. Sbaa</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ghendir"> S. Ghendir</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Chemsa"> A. Chemsa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Taleb-Ahmed"> A. Taleb-Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of a speech enhancement algorithm is to improve speech quality. In this paper, we review several speech enhancement methods and evaluate their performance based on Perceptual Evaluation of Speech Quality scores (PESQ, ITU-T P.862). All methods were evaluated in the presence of different kinds of noise using the TIMIT database and the NOIZEUS noisy speech corpus. The noise was taken from the AURORA database and includes suburban train, babble, car, exhibition hall, restaurant, street, airport, and train station noise. Simulation results showed improved speech enhancement performance for the non-stationary noise tracking approach in comparison with the other methods in terms of the PESQ measure. Moreover, we have evaluated the effects of the speech enhancement technique on a speaker identification system based on the autoregressive (AR) model and Mel-frequency cepstral coefficients (MFCC).
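The evaluation loop described in this abstract can be sketched in a short script. PESQ itself (ITU-T P.862) requires the reference implementation or a wrapper library, so this hedged sketch substitutes segmental SNR as a simple stand-in objective measure; the synthetic signals, frame length, and clamping limits are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def segmental_snr(ref, deg, frame_len=160, floor_db=-10.0, ceil_db=35.0):
    """Average per-frame SNR (dB) of a degraded signal against a reference.
    A crude stand-in for PESQ: both score a degraded signal against a clean one."""
    scores = []
    for start in range(0, len(ref) - frame_len + 1, frame_len):
        sig = sum(ref[n] ** 2 for n in range(start, start + frame_len))
        err = sum((ref[n] - deg[n]) ** 2 for n in range(start, start + frame_len))
        snr = 10.0 * math.log10(sig / err) if err > 0 else ceil_db
        scores.append(min(max(snr, floor_db), ceil_db))  # clamp outlier frames
    return sum(scores) / len(scores)

# A synthetic clean tone plus noise stands in for TIMIT/NOIZEUS material.
rng = random.Random(0)
clean = [math.sin(2 * math.pi * 220 * n / 8000) for n in range(1600)]
noise = [rng.uniform(-1.0, 1.0) for _ in range(1600)]
noisy = [c + 0.3 * v for c, v in zip(clean, noise)]
enhanced = [c + 0.05 * v for c, v in zip(clean, noise)]  # pretend an enhancer removed most noise

score_noisy = segmental_snr(clean, noisy)
score_enhanced = segmental_snr(clean, enhanced)
```

A real experiment would replace `segmental_snr` with a PESQ implementation and loop over the noise types listed in the abstract, comparing each enhancement method's score against the unprocessed noisy baseline.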
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20enhancement" title="speech enhancement">speech enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=pesq" title=" pesq"> pesq</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a> </p> <a href="https://publications.waset.org/abstracts/31102/comparative-methods-for-speech-enhancement-and-the-effects-on-text-independent-speaker-identification-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31102.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">424</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">792</span> Freedom of Speech and Involvement in Hatred Speech on Social Media Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sara%20Chinnasamy">Sara Chinnasamy</a>, <a href="https://publications.waset.org/abstracts/search?q=Michelle%20Gun"> Michelle Gun</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Adnan%20Hashim"> M. Adnan Hashim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Federal Constitution guarantees Malaysians the right to free speech and expression; yet hate speech can commonly be found on social media platforms such as Facebook, Twitter, and Instagram. In the Malaysian social media sphere, most hate speech involves religion, race, and politics. Recent racial attacks on social media have created social tension among Malaysians.
Many Malaysians argue for their right to freedom of speech. However, there are laws that limit their public expression and protect social media users from becoming victims of hate speech. This paper aims to explore the attitude of Malaysian netizens towards freedom of speech and their involvement in hate speech on social media. It also examines the relationship between involvement in hate speech among Malaysian netizens and attitudes towards freedom of speech. For most Malaysians, practicing total freedom of speech in the open is unthinkable. As a result, the best channel for articulating their feelings and opinions liberally is the internet. With the advent of the internet medium, more and more Malaysians are conveying their viewpoints through the various internet channels, although the sensitivity of the audience is seldom taken into account. Consequently, this situation has led to pockets of social disharmony among the citizens. Although this unhealthy activity is denounced by the authorities, netizens are generally of the view that they have the right to write anything they want. Using a quantitative method, a survey was conducted among Malaysians aged between 18 and 50 who are active social media users. Results from the survey reveal that despite a weak relationship between involvement in hate speech on social media and attitude towards freedom of speech, the association is still considerably significant. As such, it can be safely presumed that hate speech on social media occurs as a consequence of the freedom of speech that social media channels provide.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=freedom%20of%20speech" title="freedom of speech">freedom of speech</a>, <a href="https://publications.waset.org/abstracts/search?q=hatred%20speech" title=" hatred speech"> hatred speech</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20media" title=" social media"> social media</a>, <a href="https://publications.waset.org/abstracts/search?q=Malaysia" title=" Malaysia"> Malaysia</a>, <a href="https://publications.waset.org/abstracts/search?q=netizens" title=" netizens"> netizens</a> </p> <a href="https://publications.waset.org/abstracts/72863/freedom-of-speech-and-involvement-in-hatred-speech-on-social-media-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72863.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">457</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">791</span> Possibilities, Challenges and the State of the Art of Automatic Speech Recognition in Air Traffic Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Van%20Nhan%20Nguyen">Van Nhan Nguyen</a>, <a href="https://publications.waset.org/abstracts/search?q=Harald%20Holone"> Harald Holone</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the past few years, a lot of research has been conducted to bring Automatic Speech Recognition (ASR) into various areas of Air Traffic Control (ATC), such as air traffic control simulation and training, monitoring live operators with the aim of safety improvement, air traffic controller workload measurement, and analysis of large quantities of
controller-pilot speech. Due to the high accuracy requirements of the ATC context and its unique challenges, automatic speech recognition has not been widely adopted in this field. With the aim of providing a good starting point for researchers who are interested in bringing automatic speech recognition into ATC, this paper gives an overview of the possibilities and challenges of applying automatic speech recognition in air traffic control. To provide this overview, we present an updated literature review of speech recognition technologies in general, as well as of specific approaches relevant to the ATC context. Based on this literature review, criteria for selecting speech recognition approaches for the ATC domain are presented, and remaining challenges and possible solutions are discussed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=asr" title=" asr"> asr</a>, <a href="https://publications.waset.org/abstracts/search?q=air%20traffic%20control" title=" air traffic control"> air traffic control</a>, <a href="https://publications.waset.org/abstracts/search?q=atc" title=" atc"> atc</a> </p> <a href="https://publications.waset.org/abstracts/31004/possibilities-challenges-and-the-state-of-the-art-of-automatic-speech-recognition-in-air-traffic-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31004.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">790</span> Minimum Data of a Speech Signal as Special Indicators of Identification in
Phonoscopy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nazaket%20Gazieva">Nazaket Gazieva</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Voice biometric data, associated with physiological, psychological, and other factors, are widely used in forensic phonoscopy. There are various methods for identifying and verifying a person by voice. This article explores the minimum speech signal data as individual parameters of a speech signal. Monozygotic twins are believed to be genetically identical. Using the minimum data of the speech signal, we came to the conclusion that the voice imprint of monozygotic twins is individual. Based on the experiment, we conclude that the minimum indicators of the speech signal are more stable and reliable for phonoscopic examination. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=phonogram" title="phonogram">phonogram</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20signal" title=" speech signal"> speech signal</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20characteristics" title=" temporal characteristics"> temporal characteristics</a>, <a href="https://publications.waset.org/abstracts/search?q=fundamental%20frequency" title=" fundamental frequency"> fundamental frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=biometric%20fingerprints" title=" biometric fingerprints"> biometric fingerprints</a> </p> <a href="https://publications.waset.org/abstracts/110332/minimum-data-of-a-speech-signal-as-special-indicators-of-identification-in-phonoscopy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span
class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">789</span> Intervention of Self-Limiting L1 Inner Speech during L2 Presentations: A Study of Bangla-English Bilinguals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdul%20Wahid">Abdul Wahid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Inner speech, also known as verbal thinking, self-talk, or private speech, is characterized by the subjective experience of language in the absence of overt or audible speech. It is a psychological form of verbal activity that is rehearsed without the articulation of any sound wave. In psychology, self-limiting speech means speech that contains information inhibiting the development of the self. In most cases, people experience inner speech in their first language. This is very frequent in Bangladesh, where Bangla (L1) speaking students lose track of speech during their presentations in English (L2). This paper investigates the long pauses (more than 0.4 seconds) in English (L2) presentations by Bangla-speaking students (18 to 21 years old) and finds the intervention of Bangla (L1) inner speech to be one of their causes. The overt speeches of the presenters were loaded into the Audacity audio editing software, where the lengths of the pauses were measured in milliseconds. The Varieties of Inner Speech Questionnaire (VISQ) was administered randomly among the participants, out of whom 20 with a similar phenomenology of inner speech were selected. They were interviewed to describe the type and content of the voices that went on in their heads during the long pauses. The qualitative interview data were then codified and converted into quantitative data.
It was observed that in more than 80% of cases, students experience self-limiting inner speech/self-talk during unwanted pauses in their L2 presentations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bangla-English%20Bilinguals" title="Bangla-English Bilinguals">Bangla-English Bilinguals</a>, <a href="https://publications.waset.org/abstracts/search?q=inner%20speech" title=" inner speech"> inner speech</a>, <a href="https://publications.waset.org/abstracts/search?q=L1%20intervention%20in%20bilingualism" title=" L1 intervention in bilingualism"> L1 intervention in bilingualism</a>, <a href="https://publications.waset.org/abstracts/search?q=motor%20schema" title=" motor schema"> motor schema</a>, <a href="https://publications.waset.org/abstracts/search?q=pauses" title=" pauses"> pauses</a>, <a href="https://publications.waset.org/abstracts/search?q=phonological%20loop" title=" phonological loop"> phonological loop</a>, <a href="https://publications.waset.org/abstracts/search?q=phonological%20store" title=" phonological store"> phonological store</a>, <a href="https://publications.waset.org/abstracts/search?q=working%20memory" title=" working memory"> working memory</a> </p> <a href="https://publications.waset.org/abstracts/128980/intervention-of-self-limiting-l1-inner-speech-during-l2-presentations-a-study-of-bangla-english-bilinguals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128980.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">152</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">788</span> Performance Evaluation of Acoustic-Spectrographic Voice Identification Method in Native and Non-Native Speech</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=E.%20Krasnova">E. Krasnova</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Bulgakova"> E. Bulgakova</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Shchemelinin"> V. Shchemelinin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with the acoustic-spectrographic voice identification method in terms of its performance on non-native speech. Performance evaluation is conducted by comparing the results of analyzing recordings containing native-language speech with recordings that contain foreign-language speech. Our research is based on the Tajik and Russian speech of native Tajik speakers, owing to the nature of criminal cases involving drug trafficking. We propose a pilot experiment that represents a first attempt to enter the field. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title="speaker identification">speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic-spectrographic%20method" title=" acoustic-spectrographic method"> acoustic-spectrographic method</a>, <a href="https://publications.waset.org/abstracts/search?q=non-native%20speech" title=" non-native speech"> non-native speech</a>, <a href="https://publications.waset.org/abstracts/search?q=performance%20evaluation" title=" performance evaluation"> performance evaluation</a> </p> <a href="https://publications.waset.org/abstracts/12496/performance-evaluation-of-acoustic-spectrographic-voice-identification-method-in-native-and-non-native-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12496.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge
badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">787</span> Automatic Segmentation of the Clean Speech Signal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Ben%20Messaoud">M. A. Ben Messaoud</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Bouzid"> A. Bouzid</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Ellouze"> N. Ellouze</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech segmentation is the detection of change points for partitioning an input speech signal into regions, each of which corresponds to only one speaker. In this paper, we apply two features based on the multi-scale product (MP) of the clean speech, namely the spectral centroid of the MP and the zero-crossing rate of the MP. We focus on multi-scale product analysis as an important tool for segmentation. The multi-scale product is obtained by multiplying the wavelet transform coefficients of the speech at three successive dyadic scales. We have evaluated our method on the Keele database. Experimental results show the effectiveness of our method, which presents good performance: the two simple features can find word boundaries and extract the segments of the clean speech.
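The two features named in this abstract can be illustrated on a plain waveform. Note that this is a simplified sketch: the paper computes the spectral centroid and zero-crossing rate on the multi-scale product of wavelet coefficients, whereas here they are computed directly on signal frames, and the naive DFT is only meant for small frame sizes.

```python
import math

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def spectral_centroid(frame):
    """Magnitude-weighted mean frequency bin of a naive DFT (O(N^2), small frames only)."""
    n_len = len(frame)
    mags = []
    for k in range(n_len // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * n / n_len) for n, x in enumerate(frame))
        im = -sum(x * math.sin(2 * math.pi * k * n / n_len) for n, x in enumerate(frame))
        mags.append(math.hypot(re, im))
    total = sum(mags)
    return sum(k * m for k, m in enumerate(mags)) / total if total > 0 else 0.0

N = 64
low_tone = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]    # energy near bin 4
high_tone = [math.sin(2 * math.pi * 16 * n / N) for n in range(N)]  # energy near bin 16
```

A change-point detector in the spirit of the abstract would slide these two features over consecutive frames and flag a boundary wherever both jump between adjacent frames.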
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multiscale%20product" title="multiscale product">multiscale product</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20centroid" title=" spectral centroid"> spectral centroid</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20segmentation" title=" speech segmentation"> speech segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=zero%20crossings%20rate" title=" zero crossings rate"> zero crossings rate</a> </p> <a href="https://publications.waset.org/abstracts/17566/automatic-segmentation-of-the-clean-speech-signal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17566.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">500</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">786</span> The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fawaz%20S.%20Al-Anzi">Fawaz S. Al-Anzi</a>, <a href="https://publications.waset.org/abstracts/search?q=Dia%20AbuZeina"> Dia AbuZeina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech recognition makes an important contribution to promoting new technologies in human-computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires several stages before the desired output is obtained.
Among the components of automatic speech recognition (ASR) is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. The feature extraction process aims at approximating the linguistic content conveyed by the input speech signal. In the speech processing field, there are several methods for extracting speech features; however, Mel-frequency cepstral coefficients (MFCC) are the most popular technique. It has long been observed that MFCC features dominate well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice for characterizing the different speech segments in order to obtain the language phonemes for the subsequent training and decoding steps. Owing to its good performance, previous studies show that MFCC dominates Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps that are performed to obtain these coefficients using the HTK toolkit.
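The intermediate steps of MFCC extraction, namely pre-emphasis, windowing, power spectrum, mel filterbank, log compression, and DCT, can be sketched end to end for a single frame. This is a from-scratch illustration of the standard pipeline, not HTK's exact implementation; the frame size, 20-filter bank, and 13 output coefficients are common defaults assumed here.

```python
import math

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def power_spectrum(frame):
    """Naive DFT power spectrum |X[k]|^2 / N for k = 0 .. N/2 (small frames only)."""
    n_len = len(frame)
    spec = []
    for k in range(n_len // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * n / n_len) for n, x in enumerate(frame))
        im = -sum(x * math.sin(2 * math.pi * k * n / n_len) for n, x in enumerate(frame))
        spec.append((re * re + im * im) / n_len)
    return spec

def mfcc_frame(frame, fs=8000, n_filters=20, n_ceps=13):
    n_len = len(frame)
    # 1. Pre-emphasis boosts high frequencies.
    emphasized = [frame[0]] + [frame[n] - 0.97 * frame[n - 1] for n in range(1, n_len)]
    # 2. Hamming window reduces spectral leakage.
    windowed = [x * (0.54 - 0.46 * math.cos(2 * math.pi * n / (n_len - 1)))
                for n, x in enumerate(emphasized)]
    spec = power_spectrum(windowed)
    # 3. Triangular filters spaced evenly on the mel scale.
    mel_lo, mel_hi = hz_to_mel(0.0), hz_to_mel(fs / 2.0)
    mel_points = [mel_lo + i * (mel_hi - mel_lo) / (n_filters + 1) for i in range(n_filters + 2)]
    bins = [int((n_len + 1) * mel_to_hz(m) / fs) for m in mel_points]
    log_energies = []
    for m in range(1, n_filters + 1):
        energy = 0.0
        for k in range(bins[m - 1], bins[m]):            # rising slope
            if bins[m] > bins[m - 1]:
                energy += spec[k] * (k - bins[m - 1]) / (bins[m] - bins[m - 1])
        for k in range(bins[m], bins[m + 1]):            # falling slope
            if bins[m + 1] > bins[m]:
                energy += spec[k] * (bins[m + 1] - k) / (bins[m + 1] - bins[m])
        log_energies.append(math.log(max(energy, 1e-12)))  # 4. log compression
    # 5. DCT-II decorrelates the log filterbank energies into cepstral coefficients.
    return [sum(e * math.cos(math.pi * i * (j + 0.5) / n_filters)
                for j, e in enumerate(log_energies))
            for i in range(n_ceps)]

frame = [math.sin(2 * math.pi * 300 * n / 8000) for n in range(256)]
coeffs = mfcc_frame(frame)
```

In a full front end, this computation runs over overlapping frames (typically 25 ms with a 10 ms shift), and delta and acceleration coefficients are appended before training.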
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title="speech recognition">speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic%20features" title=" acoustic features"> acoustic features</a>, <a href="https://publications.waset.org/abstracts/search?q=mel%20frequency" title=" mel frequency"> mel frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=cepstral%20coefficients" title=" cepstral coefficients"> cepstral coefficients</a> </p> <a href="https://publications.waset.org/abstracts/78382/the-capacity-of-mel-frequency-cepstral-coefficients-for-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">785</span> Eisenhower’s Farewell Speech: Initial and Continuing Communication Effects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Kuiper">B. Kuiper</a> </p> <p class="card-text"><strong>Abstract:</strong></p> When Dwight D. Eisenhower delivered his final Presidential speech in 1961, he was using the opportunity to bid farewell to America, but he was also trying to warn his fellow countrymen about deeper challenges threatening the country. In this analysis, Eisenhower&rsquo;s speech is examined in light of the impact it had on American culture, communication concepts, and political ramifications. 
The paper initially highlights the previous literature on the speech, especially in light of its 50<sup>th </sup>anniversary, and reveals a man whose main concern was how the speech&rsquo;s words would affect his beloved country. The painstaking approach to the wording of the speech is key to revealing its intent, particularly when analyzing the motivations according to &ldquo;virtuous communication.&rdquo; This philosophical construct indicates that Eisenhower&rsquo;s Farewell Address was crafted carefully according to a departing President&rsquo;s deepest values and concerns, concepts that he wanted to pass along to his successor, to his country, and even to the world. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eisenhower" title="Eisenhower">Eisenhower</a>, <a href="https://publications.waset.org/abstracts/search?q=mass%20communication" title=" mass communication"> mass communication</a>, <a href="https://publications.waset.org/abstracts/search?q=political%20speech" title=" political speech"> political speech</a>, <a href="https://publications.waset.org/abstracts/search?q=rhetoric" title=" rhetoric"> rhetoric</a> </p> <a href="https://publications.waset.org/abstracts/50004/eisenhowers-farewell-speech-initial-and-continuing-communication-effects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50004.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">274</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">784</span> A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Qianhua%20He">Qianhua He</a>, <a href="https://publications.waset.org/abstracts/search?q=Weili%20Zhou"> Weili Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Aiwu%20Chen"> Aiwu Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. Firstly, the cross-correlation between the clean speech spectrum and the noise spectrum is analyzed, and an estimation method is proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum is learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error is adaptively determined according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach is applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech is re-synthesised via the inverse Fourier transform using the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms the conventional methods in terms of subjective and objective measures.
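The reconstruction stage can be illustrated with a miniature orthogonal matching pursuit. To keep the sketch self-contained, it uses an orthonormal DCT basis as the dictionary, so the least-squares update collapses to inner products; a K-SVD-trained over-complete dictionary, as in the paper, would require a full least-squares solve at each iteration, and the fixed stopping threshold here stands in for the paper's adapted stopping residue error.

```python
import math

def dct_atom(k, n_len):
    """k-th vector of the orthonormal DCT-II basis of length n_len."""
    scale = math.sqrt(1.0 / n_len) if k == 0 else math.sqrt(2.0 / n_len)
    return [scale * math.cos(math.pi * (n + 0.5) * k / n_len) for n in range(n_len)]

def omp(signal, atoms, stop_residue=1e-10, max_atoms=None):
    """Greedy OMP: repeatedly pick the atom most correlated with the residual.
    With orthonormal atoms, the projection coefficient is just the inner product."""
    if max_atoms is None:
        max_atoms = len(atoms)
    residual = list(signal)
    coeffs = {}
    while sum(r * r for r in residual) > stop_residue and len(coeffs) < max_atoms:
        dots = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(dots[i]))
        coeffs[k] = coeffs.get(k, 0.0) + dots[k]
        residual = [r - dots[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

N = 8
atoms = [dct_atom(k, N) for k in range(N)]
# A 2-sparse "clean spectrum": 2.0 * atom 3 plus 0.5 * atom 6.
signal = [2.0 * a3 + 0.5 * a6 for a3, a6 in zip(atoms[3], atoms[6])]
coeffs, residual = omp(signal, atoms)
```

Raising `stop_residue` makes the pursuit stop earlier and leave the remainder unexplained as noise, which is exactly the knob the paper adapts from the estimated cross-correlation and noise spectrum.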
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20denoising" title="speech denoising">speech denoising</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a>, <a href="https://publications.waset.org/abstracts/search?q=k-singular%20value%20decomposition" title=" k-singular value decomposition"> k-singular value decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=orthogonal%20matching%20pursuit" title=" orthogonal matching pursuit"> orthogonal matching pursuit</a> </p> <a href="https://publications.waset.org/abstracts/66670/a-sparse-representation-speech-denoising-method-based-on-adapted-stopping-residue-error" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66670.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">499</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">783</span> Parviz Jabrail&#039;s Novel &#039;In Foreign Language&#039;: Delimitation of Postmodernism with Modernism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nargiz%20Ismayilova">Nargiz Ismayilova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The issue of modernism and the concept of postmodernism have been the focus of researchers worldwide for many years, and very few have reached a common understanding of the term. During the independence period, the expansion of Azerbaijani literature's relations with the world has led many currents and tendencies formed in the West to spread into the country's literary environment.
In this context, the works created in this environment are distinguished by their extreme richness of subject matter and diversity of genre. As an interesting example of contemporary postmodern prose in Azerbaijan, Parviz Jabrayil's novel "In a Foreign Language" draws attention with its distinctive plotline. Critics disagree about the novel: some look for high artistry in the work; others are satisfied with its elements of postmodernism. Delimiting the border between modernism and postmodernism can serve as the basis for a deep scholarly study of the novel. The novel depicts the world in the author's consciousness against the background of a water shortage (thirst) in the Old City (Icharishahar). The author deconstructs the mould of today's Icharishahar. Along with modernism, elements of postmodernism occupy a large place in the work. When we look at the general tendencies of postmodernist art, we see that science and individuality are questioned; postmodernism criticizes the sharp boundaries of modernism and the negative effects of these restrictions and, by identifying modernism's shortcomings in the area of artistic freedom, offers alternatives for artistic production. The novel is extremely interesting from this point of view.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=concept%20of%20postmodernism" title="concept of postmodernism">concept of postmodernism</a>, <a href="https://publications.waset.org/abstracts/search?q=modernism" title=" modernism"> modernism</a>, <a href="https://publications.waset.org/abstracts/search?q=delimitation" title=" delimitation"> delimitation</a>, <a href="https://publications.waset.org/abstracts/search?q=political%20postmodernism" title=" political postmodernism"> political postmodernism</a>, <a href="https://publications.waset.org/abstracts/search?q=modern%20postmodern%20prose" title=" modern postmodern prose"> modern postmodern prose</a>, <a href="https://publications.waset.org/abstracts/search?q=Azerbaijani%20literature" title=" Azerbaijani literature"> Azerbaijani literature</a>, <a href="https://publications.waset.org/abstracts/search?q=novel" title=" novel"> novel</a>, <a href="https://publications.waset.org/abstracts/search?q=comparison" title=" comparison"> comparison</a>, <a href="https://publications.waset.org/abstracts/search?q=world%20literature" title=" world literature"> world literature</a>, <a href="https://publications.waset.org/abstracts/search?q=analysis" title=" analysis"> analysis</a> </p> <a href="https://publications.waset.org/abstracts/129909/parviz-jabrails-novel-in-foreign-language-delimitation-of-postmodernism-with-modernism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129909.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">782</span> Speech Acts and Politeness Strategies in an EFL Classroom in Georgia</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tinatin%20Kurdghelashvili">Tinatin Kurdghelashvili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with the usage of speech acts and politeness strategies in an EFL classroom in Georgia (Rep. of). It explores the students’ and the teachers’ practice of politeness strategies and of the speech acts of apology, thanking, request, compliment/encouragement, command, agreeing/disagreeing, addressing, and code switching. The research method includes observation as well as a questionnaire. The target group involves students from Georgian public schools and two certified, experienced local English teachers. The analysis is based on Searle’s speech act theory and Brown and Levinson’s politeness strategies. The findings show that the students have certain knowledge of politeness, yet they fail to apply it in English communication. In addition, most of the speech acts in the classroom interaction are used by the teachers and not the students. It is therefore suggested that teachers should cultivate the students’ communicative competence and give them opportunities to practice more English speech acts than they do today.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=english%20as%20a%20foreign%20language" title="English as a foreign language">English as a foreign language</a>, <a href="https://publications.waset.org/abstracts/search?q=Georgia" title=" Georgia"> Georgia</a>, <a href="https://publications.waset.org/abstracts/search?q=politeness%20principles" title=" politeness principles"> politeness principles</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20acts" title=" speech acts"> speech acts</a> </p> <a href="https://publications.waset.org/abstracts/17320/speech-acts-and-politeness-strategies-in-an-efl-classroom-in-georgia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17320.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">636</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">781</span> Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Shoiynbek">A. Shoiynbek</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Kozhakhmet"> K. Kozhakhmet</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Menezes"> P. Menezes</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Kuanyshbay"> D. Kuanyshbay</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Bayazitov"> D. Bayazitov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech emotion recognition has received increasing research interest in recent years. 
Most research has used emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. Four issues arise from that approach: (1) the emotions are not natural, which means that machines learn to recognize fake emotions; (2) the recordings are very limited in quantity and poor in variety of speaking; (3) speech emotion recognition (SER) is language-dependent; (4) consequently, each time researchers want to start work on SER, they need to find a good emotional database in their language. In this paper, we propose an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition and describe the sequence of actions of the proposed approach. One of the first objectives in that sequence is the speech detection issue. The paper gives a detailed description of a speech detection model based on a fully connected deep neural network for the Kazakh and Russian languages. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To illustrate the working capacity of the developed model, we have performed an analysis of speech detection and extraction on real tasks. 
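The abstract does not publish the detector’s exact layout. Purely as an illustrative sketch (the 13-coefficient MFCC input, layer sizes, ReLU/sigmoid choices, and 0.5 decision threshold are assumptions, not the authors’ model), a frame-level fully connected speech/non-speech classifier of the kind described can be outlined as follows:

```python
import numpy as np

def mlp_forward(frames, weights, biases):
    """Classify each feature frame as speech (1) or non-speech (0) with a
    small fully connected network: ReLU hidden layers, sigmoid output."""
    h = frames
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)        # ReLU hidden layer
    logits = h @ weights[-1] + biases[-1]
    probs = 1.0 / (1.0 + np.exp(-logits))     # sigmoid -> P(speech)
    return (probs > 0.5).astype(int).ravel()  # per-frame decision

# Toy setup: 13 MFCC coefficients per frame, two hidden layers.
rng = np.random.default_rng(0)
dims = [13, 32, 16, 1]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(dims[:-1], dims[1:])]
biases = [np.zeros(b) for b in dims[1:]]

frames = rng.normal(size=(5, 13))             # 5 synthetic MFCC frames
labels = mlp_forward(frames, weights, biases)
print(labels.shape)                           # one decision per frame
```

In practice the weights would be trained on labeled speech/non-speech frames; the random weights here only demonstrate the forward pass.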
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title="deep neural networks">deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20detection" title=" speech detection"> speech detection</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20emotion%20recognition" title=" speech emotion recognition"> speech emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Mel-frequency%20cepstrum%20coefficients" title=" Mel-frequency cepstrum coefficients"> Mel-frequency cepstrum coefficients</a>, <a href="https://publications.waset.org/abstracts/search?q=collecting%20speech%20emotion%20corpus" title=" collecting speech emotion corpus"> collecting speech emotion corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=collecting%20speech%20emotion%20dataset" title=" collecting speech emotion dataset"> collecting speech emotion dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset" title=" Kazakh speech dataset"> Kazakh speech dataset</a> </p> <a href="https://publications.waset.org/abstracts/152814/speech-detection-model-based-on-deep-neural-networks-classifier-for-speech-emotions-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152814.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">780</span> The Influence of Advertising Captions on the Internet through the Consumer Purchasing Decision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Suwimol%20Apapol">Suwimol Apapol</a>, <a href="https://publications.waset.org/abstracts/search?q=Punrapha%20Praditpong"> Punrapha Praditpong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objectives of the study were to find out the frequencies of figures of speech in fragrance advertising captions, as well as the types of figures of speech most commonly applied in captions. The relation between figures of speech and fragrance was also examined in order to analyze how figures of speech were used to represent fragrance. Thirty-five fragrance advertisements were randomly selected from the Internet, and content analysis was applied to consider the relation between figures of speech and fragrance. The results showed that figures of speech were found in every fragrance advertisement except one; the other thirty-four advertising captions used at least one kind of figure of speech. Metaphor was the most frequently found and most frequently applied in fragrance advertising captions, followed by alliteration, rhyme, simile and personification, and hyperbole, respectively, which is consistent with the research hypotheses. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advertising%20captions" title="advertising captions">advertising captions</a>, <a href="https://publications.waset.org/abstracts/search?q=captions%20on%20internet" title=" captions on internet"> captions on internet</a>, <a href="https://publications.waset.org/abstracts/search?q=consumer%20purchasing%20decision" title=" consumer purchasing decision"> consumer purchasing decision</a>, <a href="https://publications.waset.org/abstracts/search?q=e-commerce" title=" e-commerce"> e-commerce</a> </p> <a href="https://publications.waset.org/abstracts/39966/the-influence-of-advertising-captions-on-the-internet-through-the-consumer-purchasing-decision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39966.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">270</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">779</span> A Novel RLS Based Adaptive Filtering Method for Speech Enhancement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pogula%20Rakesh">Pogula Rakesh</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Kishore%20Kumar"> T. Kishore Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech enhancement is a long standing problem with numerous applications like teleconferencing, VoIP, hearing aids, and speech recognition. The motivation behind this research work is to obtain a clean speech signal of higher quality by applying the optimal noise cancellation technique. 
Real-time adaptive filtering algorithms are among the best candidates across the categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a Recursive Least Squares (RLS) adaptive filter for speech signals. Experiments were performed on noisy data prepared by adding AWGN, babble, and pink noise to clean speech samples at -5 dB, 0 dB, 5 dB, and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of Mean Squared Error (MSE), Signal-to-Noise Ratio (SNR), and SNR loss. Based on the performance evaluation, the proposed RLS algorithm was found to be the better optimal noise cancellation technique for speech signals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adaptive%20filter" title="adaptive filter">adaptive filter</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20noise%20canceller" title=" adaptive noise canceller"> adaptive noise canceller</a>, <a href="https://publications.waset.org/abstracts/search?q=mean%20squared%20error" title=" mean squared error"> mean squared error</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20reduction" title=" noise reduction"> noise reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=NLMS" title=" NLMS"> NLMS</a>, <a href="https://publications.waset.org/abstracts/search?q=RLS" title=" RLS"> RLS</a>, <a href="https://publications.waset.org/abstracts/search?q=SNR" title=" SNR"> SNR</a>, <a href="https://publications.waset.org/abstracts/search?q=SNR%20loss" title=" SNR loss"> SNR loss</a> </p> <a href="https://publications.waset.org/abstracts/16212/a-novel-rls-based-adaptive-filtering-method-for-speech-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16212.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span 
class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">481</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">778</span> Prosodic Characteristics of Post Traumatic Stress Disorder Induced Speech Changes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jarek%20Krajewski">Jarek Krajewski</a>, <a href="https://publications.waset.org/abstracts/search?q=Andre%20Wittenborn"> Andre Wittenborn</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20Sauerland"> Martin Sauerland</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This abstract describes a promising approach to estimating post-traumatic stress disorder (PTSD) from prosodic speech characteristics. It illustrates the validity of this method by briefly discussing results from a sample of Arabic-speaking refugees (N = 47; 32 male, 15 female). A well-established standardized self-report scale, “Reaction of Adolescents to Traumatic Stress” (RATS), was used to determine the ground-truth level of PTSD. The speech material was elicited by having participants talk about autobiographical, sadness-inducing experiences (sampling rate 16 kHz, 8-bit resolution). In order to investigate PTSD-induced speech changes, a self-developed set of 136 prosodic speech features, adapted to capture traumatization-related speech phenomena, was extracted from the .wav files. An artificial neural network (ANN) machine learning model was applied to determine the PTSD level and reached a correlation of r = .37. These results indicate that such classifiers can achieve results similar to those seen in speech-based stress research. 
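The 136-feature set is self-developed by the authors and not itemized in the abstract. As an illustrative stand-in only (the frame length, hop size, and the particular descriptors are assumptions, not the authors’ feature set), a few common frame-level prosodic descriptors and their utterance-level statistics can be computed like this:

```python
import numpy as np

def prosodic_features(signal, frame_len=400, hop=160):
    """Compute frame-level energy and zero-crossing rate, then summarise
    them with mean/std statistics over the whole utterance."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    energy = np.array([np.mean(f ** 2) for f in frames])
    # Approximate zero-crossing rate: half the mean absolute sign change.
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames])
    return {
        "energy_mean": energy.mean(), "energy_std": energy.std(),
        "zcr_mean": zcr.mean(), "zcr_std": zcr.std(),
    }

# Synthetic one-second "utterance" at 16 kHz: a 220 Hz tone plus noise.
rng = np.random.default_rng(1)
t = np.arange(16000) / 16000.0
wave = np.sin(2 * np.pi * 220 * t) + 0.05 * rng.normal(size=16000)
feats = prosodic_features(wave)
print(sorted(feats))
```

A vector of such statistics per recording would then be the input to a regressor such as the ANN mentioned in the abstract.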
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20prosody" title="speech prosody">speech prosody</a>, <a href="https://publications.waset.org/abstracts/search?q=PTSD" title=" PTSD"> PTSD</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/153941/prosodic-characteristics-of-post-traumatic-stress-disorder-induced-speech-changes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153941.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">777</span> An Algorithm Based on the Nonlinear Filter Generator for Speech Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Belmeguenai">A. Belmeguenai</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Mansouri"> K. Mansouri</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Djemili"> R. Djemili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work presents a new algorithm based on the nonlinear filter generator for speech encryption and decryption. The proposed algorithm combines a linear feedback shift register (LFSR) with a primitive feedback polynomial and a nonlinear Boolean function. The purpose of this system is to construct a keystream with good statistical properties that is also easily computable on a machine of limited computing capacity. 
The proposed speech encryption scheme is simple, highly efficient, and fast to implement for both encryption and decryption. We conclude the paper by showing that the system can resist certain known attacks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20filter%20generator" title="nonlinear filter generator">nonlinear filter generator</a>, <a href="https://publications.waset.org/abstracts/search?q=stream%20ciphers" title=" stream ciphers"> stream ciphers</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20encryption" title=" speech encryption"> speech encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20analysis" title=" security analysis"> security analysis</a> </p> <a href="https://publications.waset.org/abstracts/39095/an-algorithm-based-on-the-nonlinear-filter-generator-for-speech-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39095.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">296</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">776</span> Review of Speech Recognition Research on Low-Resource Languages</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=XuKe%20Cao">XuKe Cao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper reviews the current state of research on low-resource languages in the field of speech recognition, focusing on the challenges faced by low-resource language speech recognition, including the scarcity of data resources, the lack of linguistic resources, and the diversity of dialects and accents. 
The article reviews recent progress in low-resource language speech recognition, including techniques such as data augmentation, end-to-end models, transfer learning, and multi-task learning. Based on the challenges currently faced, the paper also provides an outlook on future research directions. Through these studies, it is expected that the performance of speech recognition for low-resource languages can be improved, promoting the widespread application and adoption of related technologies. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=low-resource%20languages" title="low-resource languages">low-resource languages</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation%20techniques" title=" data augmentation techniques"> data augmentation techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=NLP" title=" NLP"> NLP</a> </p> <a href="https://publications.waset.org/abstracts/193863/review-of-speech-recognition-research-on-low-resource-languages" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193863.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">14</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">775</span> Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aisultan%20Shoiynbek">Aisultan Shoiynbek</a>, <a href="https://publications.waset.org/abstracts/search?q=Darkhan%20Kuanyshbay"> 
Darkhan Kuanyshbay</a>, <a href="https://publications.waset.org/abstracts/search?q=Paulo%20Menezes"> Paulo Menezes</a>, <a href="https://publications.waset.org/abstracts/search?q=Akbayan%20Bekarystankyzy"> Akbayan Bekarystankyzy</a>, <a href="https://publications.waset.org/abstracts/search?q=Assylbek%20Mukhametzhanov"> Assylbek Mukhametzhanov</a>, <a href="https://publications.waset.org/abstracts/search?q=Temirlan%20Shoiynbek"> Temirlan Shoiynbek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech emotion recognition (SER) has received increasing research interest in recent years. It is a common practice to utilize emotional speech collected under controlled conditions recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues related to that approach: emotions are not natural, meaning that machines are learning to recognize fake emotions; emotions are very limited in quantity and poor in variety of speaking; there is some language dependency in SER; consequently, each time researchers want to start work with SER, they need to find a good emotional database in their language. This paper proposes an approach to create an automatic tool for speech emotion extraction based on facial emotion recognition and describes the sequence of actions involved in the proposed approach. One of the first objectives in the sequence of actions is the speech detection issue. The paper provides a detailed description of the speech detection model based on a fully connected deep neural network for Kazakh and Russian. Despite the high results in speech detection for Kazakh and Russian, the described process is suitable for any language. To investigate the working capacity of the developed model, an analysis of speech detection and extraction from real tasks has been performed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title="deep neural networks">deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20detection" title=" speech detection"> speech detection</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20emotion%20recognition" title=" speech emotion recognition"> speech emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Mel-frequency%20cepstrum%20coefficients" title=" Mel-frequency cepstrum coefficients"> Mel-frequency cepstrum coefficients</a>, <a href="https://publications.waset.org/abstracts/search?q=collecting%20speech%20emotion%20corpus" title=" collecting speech emotion corpus"> collecting speech emotion corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=collecting%20speech%20emotion%20dataset" title=" collecting speech emotion dataset"> collecting speech emotion dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset" title=" Kazakh speech dataset"> Kazakh speech dataset</a> </p> <a href="https://publications.waset.org/abstracts/189328/speech-detection-model-based-on-deep-neural-networks-classifier-for-speech-emotions-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/189328.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">26</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">774</span> Modern Machine Learning Conniptions for Automatic Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Jagadeesh%20Kumar">S. 
Jagadeesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a clear overview of recent machine learning practices as employed in modern and prospective automatic speech recognition schemes. The aim is to promote further cross-pollination between the machine learning and automatic speech recognition communities, building on the exchanges that have occurred in the past. The manuscript is structured around the chief machine learning paradigms that are either well established or have the potential to make significant contributions to automatic speech recognition. The paradigms presented and discussed in this article include adaptive and multi-task learning, active learning, Bayesian learning, discriminative learning, generative learning, and supervised and unsupervised learning. These learning paradigms are motivated and discussed in the context of automatic speech recognition tools and applications. The manuscript also surveys recent advances in deep learning and learning with sparse representations, with particular emphasis on their continuing significance for the evolution of automatic speech recognition. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning%20methods" title=" deep learning methods"> deep learning methods</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning%20archetypes" title=" machine learning archetypes"> machine learning archetypes</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20learning" title=" Bayesian learning"> Bayesian learning</a>, <a href="https://publications.waset.org/abstracts/search?q=supervised%20and%20unsupervised%20learning" title=" supervised and unsupervised learning"> supervised and unsupervised learning</a> </p> <a href="https://publications.waset.org/abstracts/71467/modern-machine-learning-conniptions-for-automatic-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71467.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">448</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=26">26</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=27">27</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=delimitation%20of%20speech&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr 
style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
