Search results for: Kazakh speech dataset

Commenced in January 2007 | Frequency: Monthly | Edition: International | Paper Count: 1937

1937. Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition
Authors: A. Shoiynbek, K. Kozhakhmet, P. Menezes, D. Kuanyshbay, D. Bayazitov
Abstract: Speech emotion recognition has received increasing research interest in recent years. Most work has used emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. Four issues arise from that approach: (1) the emotions are not natural, which means machines learn to recognize fake emotions; (2) the material is limited in quantity and poor in variety of speaking; (3) speech emotion recognition (SER) is language-dependent; (4) consequently, whenever researchers want to start work on SER, they need to find a good emotional database in their language. In this paper, we propose an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition and describe the sequence of actions of the proposed approach. One of the first objectives in that sequence is the speech detection issue. The paper gives a detailed description of a speech detection model based on a fully connected deep neural network for the Kazakh and Russian languages. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To illustrate the working capacity of the developed model, we performed an analysis of speech detection and extraction on real tasks.
Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset
PDF: https://publications.waset.org/abstracts/152814.pdf (Downloads: 101)
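
The two ingredients named here, Mel-frequency cepstral coefficients and a fully connected deep neural network, suggest a frame-level speech/non-speech classifier. A minimal sketch of that pipeline follows; the librosa feature extraction, the Keras layer sizes, and the training settings are illustrative assumptions, not the authors' published configuration.

```python
# A minimal sketch of an MFCC + fully connected DNN speech/non-speech
# detector in the spirit of the abstract above. Frame settings, layer
# sizes, and training choices are assumptions for illustration only.
import librosa
from tensorflow import keras

def mfcc_frames(wav_path, sr=16000, n_mfcc=13):
    """Return one MFCC feature vector per analysis frame."""
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.T                                           # (frames, n_mfcc)

def build_detector(n_mfcc=13):
    """Fully connected network emitting P(speech) per frame."""
    return keras.Sequential([
        keras.layers.Input(shape=(n_mfcc,)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])

# X: stacked MFCC frames, y: 1 = speech, 0 = non-speech (hypothetical data)
# model = build_detector()
# model.compile(optimizer="adam", loss="binary_crossentropy",
#               metrics=["accuracy"])
# model.fit(X, y, epochs=10, batch_size=256)
```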

1936. Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition
Authors: Aisultan Shoiynbek, Darkhan Kuanyshbay, Paulo Menezes, Akbayan Bekarystankyzy, Assylbek Mukhametzhanov, Temirlan Shoiynbek
Abstract: Speech emotion recognition (SER) has received increasing research interest in recent years. It is common practice to use emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues with that approach: the emotions are not natural, meaning that machines learn to recognize fake emotions; the material is limited in quantity and poor in variety of speaking; there is some language dependency in SER; and, consequently, each time researchers want to start work on SER, they need to find a good emotional database in their language. This paper proposes an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition and describes the sequence of actions involved. One of the first objectives in that sequence is the speech detection issue. The paper provides a detailed description of the speech detection model based on a fully connected deep neural network for Kazakh and Russian. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To investigate the working capacity of the developed model, an analysis of speech detection and extraction from real tasks has been performed.
Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset
PDF: https://publications.waset.org/abstracts/189328.pdf (Downloads: 26)

1935. Developing Kazakh Language Fluency Test in Nazarbayev University
Authors: Saule Mussabekova, Samal Abzhanova
Abstract: The Kazakh Language Fluency Test, based on the IELTS exam, was implemented in 2012 at Nazarbayev University in Astana, Kazakhstan. We would like to share our experience in developing this exam, together with some exam results, with other language instructors; this paper covers the exam's peculiarities and related issues. The Kazakh Language Fluency Test is a young exam, and during its development we faced many difficulties. One of the goals of the university and the country is to encourage fluency in the Kazakh language for all citizens of the Republic, and Nazarbayev University has introduced a Kazakh language program to assist in achieving this goal. This policy is one step in ensuring that NU students have a thorough understanding of the Kazakh language through a fluency test based on the International English Language Testing System (IELTS). The test aims to determine students' knowledge of the Kazakh language. There are three types of students at Nazarbayev University: Kazakh-speaking heritage learners, Russian-speaking students, and English-speaking students; unfortunately, there are Kazakh students who do not speak Kazakh. All students who finished school with Russian-language instruction take the Kazakh Language Fluency Test to determine their Kazakh level. After the test, students choose the appropriate Kazakh course: Basic Kazakh, Intermediate Kazakh, or Upper-Intermediate Kazakh. The test consists of four parts, Listening, Reading, Writing, and Speaking, taken on the same day in that order.
Keywords: diagnostic test, Kazakh language, placement test, test result
PDF: https://publications.waset.org/abstracts/46325.pdf (Downloads: 406)

1934. Spoken Subcorpus of the Kazakh Language: History, Content, Methodology
Authors: Kuralay Bimoldaevna Kuderinova, Beisenkhan Samal
Abstract: The history of creating a linguistic corpus in Kazakh linguistics begins only in 2016. Yet within this short period the linguistic corpus has become a national corpus, and several subcorpora, namely historical, cultural, spoken, dialectological, writers', proverbs, and poetic-texts subcorpora, have appeared and are working effectively. Among them, the spoken subcorpus has its own characteristics. Kazakh belongs to the Kipchak-Nogai group of Turkic languages. As part of the former Soviet Union, it was directly influenced by Russian and underwent major changes in its spoken and written forms. After the Republic of Kazakhstan gained independence, the Kazakh language received the status of state language in 1991. However, the prestige of Russian today is still higher than that of Kazakh, so the direct influence of Russian on the structure, style, and vocabulary of Kazakh continues. In particular, the national practice of the spoken language is disappearing, as the spoken form of Kazakh is not used in official gatherings and events of state importance. In this regard, it is very important to collect and preserve examples of spoken language. Recording exemplary spoken texts, converting them into written form, and providing their audio along with orthoepic explanations will serve as a valuable tool for teaching and learning the Kazakh language. The report therefore covers interesting aspects and scientific foundations of the creation, content, and methodology of the spoken subcorpus of the Kazakh language.
Keywords: spoken corpus, Kazakh language, orthoepic norm, LLM
PDF: https://publications.waset.org/abstracts/192605.pdf (Downloads: 8)

1933. Developing an Intonation Labeled Dataset for Hindi
Authors: Esha Banerjee, Atul Kumar Ojha, Girish Nath Jha
Abstract: This study aims to develop an intonation-labeled database for Hindi. Although no single standard for prosody labeling exists in Hindi, researchers have employed perceptual and statistical methods in the literature to draw inferences about the behavior of prosody patterns in Hindi. Based on such existing research and largely agreed-upon intonational theories in Hindi, this study attempts to develop a manually annotated prosodic corpus of Hindi speech data, which can be used for training speech models for natural-sounding speech in the future. 100 sentences (500 words) each for declarative and interrogative types have been labeled using Praat.
Keywords: speech dataset, Hindi, intonation, labeled corpus
PDF: https://publications.waset.org/abstracts/142503.pdf (Downloads: 199)
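
Since the labeling was done in Praat, downstream use of such a corpus would typically start by reading the TextGrid annotation files. A small sketch using the third-party textgrid package follows; the tier name and file name are hypothetical, and the package choice is an assumption rather than anything stated in the abstract.

```python
# A sketch of reading Praat TextGrid annotations such as the intonation
# labels described above. The tier name "intonation" and the file name
# are hypothetical placeholders.
import textgrid

tg = textgrid.TextGrid.fromFile("declarative_001.TextGrid")
for tier in tg.tiers:
    if tier.name == "intonation":
        for interval in tier:
            if interval.mark:  # skip unlabeled stretches
                print(f"{interval.minTime:.2f}-{interval.maxTime:.2f}: "
                      f"{interval.mark}")
```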

1932. The Application of a Hybrid Neural Network for Recognition of a Handwritten Kazakh Text
Authors: Almagul Assainova, Dariya Abykenova, Liudmila Goncharenko, Sergey Sybachin, Saule Rakhimova, Abay Aman
Abstract: The recognition of handwritten Kazakh text is a relevant objective today for the digitization of materials. The study presents a hybrid neural network model for handwriting recognition, which combines a convolutional neural network and a multi-layer perceptron. Each network includes 1024 input neurons and 42 output neurons. The model is implemented in a program written in the Python programming language using the EMNIST database and the NumPy, Keras, and TensorFlow modules. The network was trained on the specific letters of the Kazakh alphabet ә, ғ, қ, ң, ө, ұ, ү, h, і. The neural network model and the program created on its basis can be used in electronic document management systems to digitize Kazakh text.
Keywords: handwriting recognition system, image recognition, Kazakh font, machine learning, neural networks
PDF: https://publications.waset.org/abstracts/129773.pdf (Downloads: 262)
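
The abstract fixes only the interface of the hybrid model: 1024 input neurons (a flattened 32x32 glyph) and 42 output classes. One plausible Keras realization, a convolutional branch and a perceptron branch merged before the classifier, is sketched below; the internal layer configuration is an assumption, since the paper does not publish it here.

```python
# An illustrative Keras sketch of a hybrid CNN + MLP classifier with 1024
# inputs and 42 output classes, per the abstract. Only the input/output
# sizes come from the paper; everything else is assumed.
from tensorflow import keras
from tensorflow.keras import layers

def build_hybrid(num_classes=42):
    inputs = keras.Input(shape=(1024,))              # flattened 32x32 image
    img = layers.Reshape((32, 32, 1))(inputs)
    # Convolutional branch
    x = layers.Conv2D(32, 3, activation="relu")(img)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    # Multi-layer perceptron branch on the raw pixels
    m = layers.Dense(256, activation="relu")(inputs)
    # Merge the two branches and classify
    merged = layers.Concatenate()([x, m])
    outputs = layers.Dense(num_classes, activation="softmax")(merged)
    return keras.Model(inputs, outputs)

model = build_hybrid()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```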

1931. Kazakh Language Assessment in a New Multilingual Kazakhstan
Authors: Karlygash Adamova
Abstract: This article focuses on the KazTest as one of the most important high-stakes tests and the key tool in Kazakh language assessment. The research also includes a brief introduction to language policy in Kazakhstan: it is changing significantly, turning from bilingualism (Kazakh, Russian) to a multilingual policy (three languages: Kazakh, Russian, English), so the current status of these languages is described. Given the various educational reforms in the country, the language evaluation system should also be improved and moderated. The research presents the most significant test in Kazakhstan, the KazTest, which is aimed at evaluating Kazakh language proficiency. Assessment is an ongoing process that encompasses a wide area of knowledge about learners' productive performance, and a test is widely defined as a standardized method of research, testing, diagnostics, and verification. The two most important characteristics of any test as the main element of assessment, validity and reliability, are also described in this paper; since the test is assumed to be an indicator of knowledge, it is highly important to take these properties into account in its preparation and design.
Keywords: multilingualism, language assessment, testing, language policy
PDF: https://publications.waset.org/abstracts/121447.pdf (Downloads: 136)

1930. The Influence of English Learning on Ethnic Kazakh Minority Students' Identity (Re)Construction at Chinese Universities
Authors: Sharapat Sharapat
Abstract: The English language is perceived as cultural capital in many non-native English-speaking countries, and minority groups in these social contexts seem to invest in the language to be empowered and to reposition themselves in the imbalanced power relation with the dominant group. This study explores how English learning influences minority Kazakh students' identity (re)construction at Chinese universities through the lens of Norton's (2013) theory of imagined community, investment, and identity. To this end, three research questions were designed, addressing: (1) Kazakh minority students' English learning experiences at Chinese universities; (2) their views about the benefits and opportunities of English learning; and (3) the influence of English learning on their identity (re)construction. The study employs an interview-based qualitative method, interviewing nine Kazakh minority students at universities in Xinjiang and other inland cities in China. The findings suggest that through English learning, some students have reconstructed multiple identities as multicultural and global identities, which created 'a third space' that breaks the limits of their ethnic and national identities, along with a confused identity as someone in-between. Meanwhile, most minority students were empowered by the English language to resist inferior or marginalized positions and reconstruct an imagined elite identity. However, English learning disempowered students who had little previous English education in school and placed them on an unequal footing with other students, further escalating educational inequities.
Keywords: minority in China, identity construction, multilingual education, language empowerment
PDF: https://publications.waset.org/abstracts/129160.pdf (Downloads: 231)

1929. Hate Speech Detection in Tunisian Dialect
Authors: Helmi Baazaoui, Mounir Zrigui
Abstract: This study addresses the challenge of hate speech detection in Tunisian Arabic text, a critical issue for online safety and moderation. Leveraging the strengths of the AraBERT model, we fine-tuned and evaluated its performance against a Bi-LSTM model across four distinct datasets: T-HSAB, TNHS, TUNIZI-Dataset, and a newly compiled dataset with diverse labels such as Offensive Language, Racism, and Religious Intolerance. Our experimental results demonstrate that AraBERT significantly outperforms the Bi-LSTM in terms of recall, precision, F1-score, and accuracy across all datasets. The findings underline the robustness of AraBERT in capturing the nuanced features of Tunisian Arabic and its superior capability in classification tasks. This research not only advances the technology for hate speech detection but also provides practical implications for social media moderation and policy-making in Tunisia. Future work will focus on expanding the datasets and exploring more sophisticated architectures to further enhance detection accuracy, thus promoting safer online interactions.
Keywords: hate speech detection, Tunisian Arabic, AraBERT, Bi-LSTM, Gemini annotation tool, social media moderation
PDF: https://publications.waset.org/abstracts/193877.pdf (Downloads: 11)
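
Fine-tuning AraBERT for this kind of classification is normally done through the Hugging Face transformers API. A minimal sketch follows; the aubmindlab/bert-base-arabertv2 checkpoint, the four-label setup, and the in-memory texts/labels are assumptions standing in for the paper's datasets, not the authors' actual training code.

```python
# A minimal Hugging Face fine-tuning sketch for Arabic hate-speech
# classification in the spirit of the abstract. Checkpoint name, label
# count, and training settings are illustrative assumptions.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "aubmindlab/bert-base-arabertv2"   # a public AraBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=4)                   # e.g. none/offensive/racism/religious

class HateSpeechDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# texts, labels = ...  (hypothetical in-memory data)
# trainer = Trainer(model=model,
#                   args=TrainingArguments(output_dir="out",
#                                          num_train_epochs=3),
#                   train_dataset=HateSpeechDataset(texts, labels))
# trainer.train()
```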

1928. Language Factor in the Formation of National and Cultural Identity of Kazakhstan
Authors: Andabayeva Dina, Avakova Raushangul, Kortabayeva Gulzhamal, Rakhymbay Bauyrzhan
Abstract: This article attempts to give an overview of the language situation and language planning in Kazakhstan. Statistical data are given, and a brief excursion into the history of languages in Kazakhstan is made. Particular emphasis is placed on the national-cultural component of the Kazakh people, namely the impact of the specificity of the Kazakh language on ethnic identity. Language is one of the basic aspects of national identity, and recently the Republic of Kazakhstan has conducted purposeful work on language development. An optimal solution to language problems is a factor in harmonizing interethnic relations and in strengthening and consolidating peoples and public consent. Language development is one of the important directions of state policy in the Republic of Kazakhstan. The problem of the state language, as part of national (civil) identification, plays a huge role in the successful integration of Kazakh society, and it is fair to assume that one of the foundations of a new civic identity is knowledge of the Kazakh language by all citizens of Kazakhstan. The article analyzes the language situation in Kazakhstan in close connection with the peculiarities of cultural identity.
Keywords: Kazakhstan, mentality, language policy, ethnolinguistics, language planning, language personality
PDF: https://publications.waset.org/abstracts/22912.pdf (Downloads: 635)

1927. Robust Noisy Speech Identification Using Frame Classifier Derived Features
Authors: Punnoose A. K.
Abstract: This paper presents an approach to identifying noisy speech recordings using a multi-layer perceptron (MLP) trained to predict phonemes from acoustic features. Characteristics of the MLP posteriors are explored for clean and noisy speech at the frame level. Appropriate density functions are fitted to the softmax probabilities of clean and noisy speech, and a function is formulated that takes into account the ratio of the softmax probability density of noisy speech to that of clean speech. This phoneme-independent score is weighted by phoneme-specific weights to make the scoring more robust, and simple thresholding is used to separate noisy recordings from clean ones. The approach is benchmarked on standard databases, with a focus on precision.
Keywords: noisy speech identification, speech pre-processing, noise robustness, feature engineering
PDF: https://publications.waset.org/abstracts/144694.pdf (Downloads: 127)
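
The scoring idea, a weighted log-ratio of densities fitted to frame-level softmax probabilities, can be made concrete as below. The Beta-density choice, the per-phoneme weights, and the threshold are illustrative assumptions; the abstract does not specify which density functions the author fits.

```python
# A schematic of density-ratio scoring over MLP posteriors, per the
# abstract above. Beta densities, weights, and threshold are assumed
# purely to make the ratio-and-threshold logic concrete.
import numpy as np
from scipy import stats

def recording_score(posteriors, phoneme_ids, clean_fit, noisy_fit, weights):
    """posteriors: (frames,) max softmax probability per frame
    phoneme_ids: (frames,) predicted phoneme id per frame
    clean_fit/noisy_fit: (a, b) Beta parameters fitted on clean/noisy data
    weights: per-phoneme weight array, indexed by phoneme id"""
    p_noisy = stats.beta.pdf(posteriors, *noisy_fit)
    p_clean = stats.beta.pdf(posteriors, *clean_fit)
    ratio = np.log(p_noisy + 1e-12) - np.log(p_clean + 1e-12)
    return float(np.mean(weights[phoneme_ids] * ratio))

# A recording is flagged as noisy when its score exceeds a threshold:
# is_noisy = recording_score(post, ids, clean_fit, noisy_fit, w) > threshold
```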

1926. An Analysis of Illocutionary Act in Martin Luther King Jr.'s Propaganda Speech Entitled 'I Have a Dream'
Authors: Mahgfirah Firdaus Soberatta
Abstract: Language cannot be separated from human life. Humans use language to convey ideas, thoughts, and feelings; we can use words to do different things, for example asserting, advising, promising, giving opinions, or expressing hopes. Propaganda seeks to obtain stable behavior and to adapt everyone to it in everyday life; it also exerts lasting control over the thoughts and attitudes of individuals in social settings. In this research, the writer discusses the speech acts in a propaganda speech delivered by Martin Luther King Jr. in Washington at the Lincoln Memorial on August 28, 1963. 'I Have a Dream' is a public speech delivered by the American civil rights activist MLK, in which he calls for an end to racism in the USA. The writer uses Searle's theory to analyze the types of illocutionary speech acts used by Martin Luther King Jr. in his propaganda speech, employing a qualitative, descriptive method, because the research aims to describe and explain those types. The findings indicate that there are five types of speech acts in Martin Luther King Jr.'s speech. MLK used both direct and indirect speech in his propaganda speech, with direct speech the dominant speech act. It is hoped that this research will be useful for readers wishing to enrich their knowledge in the pragmatic field of speech acts.
Keywords: speech act, propaganda, Martin Luther King Jr., speech
PDF: https://publications.waset.org/abstracts/45649.pdf (Downloads: 441)

1925. The Online Advertising Speech that Effect to the Thailand Internet User Decision Making
Authors: Panprae Bunyapukkna
Abstract: This study investigated figures of speech used in fragrance advertising captions on the Internet. The objectives were to find out the frequencies of figures of speech in fragrance advertising captions and the types most commonly applied, and to examine the relation between figures of speech and fragrance in order to analyze how figures of speech are used to represent fragrance. Thirty-five fragrance advertisements were randomly selected from the Internet, and content analysis was applied to consider the relation between figures of speech and fragrance. The results showed that figures of speech were found in almost every fragrance advertisement except one by Lancôme: thirty-four fragrance advertising captions used at least one figure of speech. Metaphor was most frequently found and applied, followed by alliteration, rhyme, simile and personification, and hyperbole, respectively.
Keywords: advertising speech, fragrance advertisements, figures of speech, metaphor
PDF: https://publications.waset.org/abstracts/44259.pdf (Downloads: 241)

1924. Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area
Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya
Abstract: In this paper, we propose an optimized brain-computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) an EEG acquisition module based on a non-invasive headset with 14 electrodes; (ii) a preprocessing module that removes noise and artifacts using the common average reference method; (iii) a feature extraction module using the wavelet packet transform (WPT); and (iv) a classification module based on a one-hidden-layer artificial neural network. The study compares the recognition accuracy for 5 Arabic words when using all the headset electrodes versus only the 4 electrodes situated near the Wernicke area, as well as the effect of selecting the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT reduces the dataset size to approximately 19% of the total while retaining an accuracy of 67.5%. This reduction is particularly important for the design of a low-cost, simple-to-use BCI trained for several words.
Keywords: brain-computer interface, speech recognition, artificial neural network, electroencephalography, EEG, Wernicke area
PDF: https://publications.waset.org/abstracts/86773.pdf (Downloads: 272)
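
Modules (iii) and (iv) of this pipeline, wavelet packet features feeding a one-hidden-layer network, can be sketched with PyWavelets and Keras as follows. The db4 wavelet, the log-energy features, and the hidden-layer width are assumptions; only the 8-level decomposition, the subband selection idea, and the single hidden layer come from the abstract.

```python
# A sketch of wavelet-packet feature extraction plus a one-hidden-layer
# classifier, mirroring modules (iii) and (iv) above. Wavelet choice and
# energy features are assumptions for illustration.
import numpy as np
import pywt
from tensorflow import keras

def wpt_band_energies(eeg_channel, wavelet="db4", level=8, keep=slice(None)):
    """Log-energy of each terminal subband of a `level`-deep wavelet packet
    tree; `keep` can select e.g. only the middle subbands."""
    wp = pywt.WaveletPacket(data=eeg_channel, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(n.data ** 2) for n in nodes])
    return np.log(energies + 1e-12)[keep]

def build_classifier(n_features, n_words=5):
    """One hidden layer, one softmax output per word, as in the abstract."""
    return keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(64, activation="tanh"),   # the single hidden layer
        keras.layers.Dense(n_words, activation="softmax"),
    ])
```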

1923. TeleMe Speech Booster: Web-Based Speech Therapy and Training Program for Children with Articulation Disorders
Authors: C. Treerattanaphan, P. Boonpramuk, P. Singla
Abstract: Frequent, continuous speech training has proven to be a necessary part of a successful speech therapy process, but constraints of travel time and work schedules become key obstacles, especially for individuals living in remote areas or for dependent children with working parents. To ameliorate speech difficulties with ample guidance from speech therapists, a website has been developed that supports speech therapy and training for people with articulation disorders in the standard Thai language. This web-based program can record speech training exercises for each trainee; the records are stored in a database for the speech therapist to investigate, evaluate, compare, and track each trainee's progress in detail, and trainees can request live discussions via video conference call when needed. Communication through this web-based program facilitates and reduces training time in comparison to walk-in training or appointments. This type of training also allows people with articulation disorders to practice speech lessons whenever and wherever it is convenient for them, which can lead to a more regular training process.
Keywords: web-based remote training program, Thai speech therapy, articulation disorders, speech booster
PDF: https://publications.waset.org/abstracts/13916.pdf (Downloads: 375)

1922. Development of Non-Intrusive Speech Evaluation Measure Using S-Transform and Light-GBM
Authors: Tusar Kanti Dash, Ganapati Panda
Abstract: The evaluation of speech quality and intelligibility is critical to the overall effectiveness of speech enhancement algorithms. Several intrusive and non-intrusive measures are employed to calculate these parameters; non-intrusive evaluation is the most challenging, as the reference clean speech data is very often not available. In this paper, a novel non-intrusive speech evaluation measure is proposed using audio features derived from the Stockwell transform. These features are used with the Light Gradient Boosting Machine for effective prediction of speech quality and intelligibility. The proposed model is analyzed using noisy and reverberant speech from four databases, and the results are compared with the standard intrusive evaluation measures. The comparative analysis shows that the proposed model performs better than the standard non-intrusive models.
Keywords: non-intrusive speech evaluation, S-transform, LightGBM, speech quality and intelligibility
PDF: https://publications.waset.org/abstracts/139626.pdf (Downloads: 259)
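
A schematic of such a pipeline is sketched below: a compact FFT-based discrete Stockwell transform, simple per-band statistics, and a LightGBM regressor predicting a quality score. The band statistics and model settings are assumptions; the paper's actual feature set is not reproduced here.

```python
# A schematic non-intrusive quality predictor: discrete S-transform
# features into a LightGBM regressor. Feature statistics and model
# settings are illustrative assumptions.
import numpy as np
import lightgbm as lgb

def stockwell(x):
    """FFT-based discrete S-transform; returns a (freq x time) matrix."""
    N = len(x)
    X = np.fft.fft(x)
    S = np.zeros((N // 2, N), dtype=complex)
    S[0] = np.mean(x)                        # zero-frequency row
    for k in range(1, N // 2):
        m = np.arange(N)
        m_signed = (m + N // 2) % N - N // 2  # frequency shift, signed
        gauss = np.exp(-2 * np.pi ** 2 * m_signed ** 2 / k ** 2)
        S[k] = np.fft.ifft(np.roll(X, -k) * gauss)
    return S

def st_features(x, n_bands=8):
    """Mean and spread of log-magnitude in a few frequency bands."""
    mag = np.abs(stockwell(x)) + 1e-12
    bands = np.array_split(np.log(mag), n_bands, axis=0)
    return np.concatenate([[b.mean(), b.std()] for b in bands])

# X = np.stack([st_features(sig) for sig in signals])   # hypothetical data
# y = quality_scores                                    # e.g. PESQ targets
# model = lgb.LGBMRegressor(n_estimators=300).fit(X, y)
```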

1921. Annexation (Al-Iḍāfah) in Thariq bin Ziyad's Speech
Authors: Annisa D. Febryandini
Abstract: Annexation is a typical construction commonly used in the Arabic language, and it appears in Arabic speeches such as that of Thariq bin Ziyad. The speech, one of the most famous in the history of Islam, uses many annexations. This qualitative research paper uses secondary data gathered by the library method. Based on the data, the paper concludes that the speech has two basic annexation structures with some variations, as well as certain grammatical relationships. Unlike other research that approaches the speech from a sociological standpoint, this paper analyzes it linguistically, looking at the structure of its annexations and their grammatical relationships.
Keywords: annexation, Thariq bin Ziyad, grammatical relationship, Arabic syntax
PDF: https://publications.waset.org/abstracts/72847.pdf (Downloads: 319)

1920. Blind Speech Separation Using SRP-PHAT Localization and Optimal Beamformer in Two-Speaker Environments
Authors: Hai Quang Hong Dam, Hai Ho, Minh Hoang Le Ngo
Abstract: This paper investigates the problem of blind speech separation from a speech mixture of two speakers. A voice activity detector employing the Steered Response Power - Phase Transform (SRP-PHAT) is presented for detecting the activity information of the speech sources; the desired speech signals are then extracted from the speech mixture using an optimal beamformer. To evaluate the algorithm's effectiveness, a simulation using real speech recordings was performed in a double-talk situation where two speakers are active all the time. The evaluations show that the proposed blind speech separation algorithm offers good interference suppression while maintaining low distortion of the desired signal.
Keywords: blind speech separation, voice activity detector, SRP-PHAT, optimal beamformer
PDF: https://publications.waset.org/abstracts/53263.pdf (Downloads: 283)
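
The core of an SRP-PHAT localizer is the PHAT-weighted generalized cross-correlation between microphone pairs; the steered response power is obtained by summing such correlations over candidate steering positions. A minimal GCC-PHAT delay estimator under those standard definitions is sketched below; frame handling and the search over steering positions are simplified away.

```python
# A minimal GCC-PHAT sketch, the building block of SRP-PHAT localization
# as used by the voice activity detector above. Framing and the steering
# search are simplified assumptions.
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None, interp=16):
    """Estimate the time delay between two microphone signals."""
    n = sig.size + ref.size
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                    # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=interp * n)
    max_shift = interp * n // 2
    if max_tau:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)         # delay in seconds

# tau = gcc_phat(mic1_frame, mic2_frame, fs=16000, max_tau=0.001)
```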

1919. The Advancements of Transformer Models in Part-of-Speech Tagging System for Low-Resource Tigrinya Language
Authors: Shamm Kidane, Ibrahim Abdella, Fitsum Gaim, Simon Mulugeta, Sirak Asmerom, Natnael Ambasager, Yoel Ghebrihiwot
Abstract: The call for natural language processing (NLP) systems for low-resource languages has become more apparent than ever in the past few years, and arduous challenges remain in preparing such systems. This paper presents an improved version of the Nagaoka Tigrinya Corpus for a part-of-speech (POS) classification system in the Tigrinya language. The initial Nagaoka dataset was enlarged, bringing the new tagged corpus to 118K tokens annotated with the 12 basic POS tags used previously. The additional content was annotated manually in a stringent manner, followed rules similar to those of the former dataset, and was formatted in CoNLL format. The system made use of the monolingually pre-trained TiELECTRA, TiBERT, and TiRoBERTa transformer models. The highest score achieved is an impressive weighted F1-score of 94.2%, which surpasses previous systems by a significant margin. The system will prove useful for the progress of NLP-related tasks for Tigrinya and similar low-resource languages, with room for cross-referencing higher-resource languages.
Keywords: Tigrinya POS corpus, TiBERT, TiRoBERTa, conditional random fields
PDF: https://publications.waset.org/abstracts/177822.pdf (Downloads: 103)
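
Corpora in CoNLL format store one token per line with its tag and separate sentences with blank lines. A small reader under those usual conventions might look like this; the exact column layout of the Nagaoka corpus is not given in the abstract, so the two-column assumption and file name are placeholders.

```python
# A small reader for two-column CoNLL-style token/POS files such as the
# corpus described above. The token-tag column order and the blank-line
# sentence separator are the usual conventions, assumed here.
def read_conll(path):
    """Yield sentences as lists of (token, pos_tag) pairs."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:                 # blank line ends a sentence
                if sentence:
                    yield sentence
                    sentence = []
            else:
                token, tag = line.split()[:2]
                sentence.append((token, tag))
    if sentence:                         # file may not end with a blank line
        yield sentence

# sentences = list(read_conll("tigrinya_pos.conll"))  # hypothetical file
```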
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tigrinya%20POS%20corpus" title="Tigrinya POS corpus">Tigrinya POS corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=TiBERT" title=" TiBERT"> TiBERT</a>, <a href="https://publications.waset.org/abstracts/search?q=TiRoBERTa" title=" TiRoBERTa"> TiRoBERTa</a>, <a href="https://publications.waset.org/abstracts/search?q=conditional%20random%20fields" title=" conditional random fields"> conditional random fields</a> </p> <a href="https://publications.waset.org/abstracts/177822/the-advancements-of-transformer-models-in-part-of-speech-tagging-system-for-low-resource-tigrinya-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177822.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1918</span> Speech Impact Realization via Manipulative Argumentation Techniques in Modern American Political Discourse</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zarine%20Avetisyan">Zarine Avetisyan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Paper presents the discussion of scholars concerning speech impact, peculiarities of its realization, speech strategies, and techniques. Departing from the viewpoints of many prominent linguists, the paper suggests manipulative argumentation be viewed as a most pervasive speech strategy with a certain set of techniques which are to be found in modern American political discourse. The precedence of their occurrence allows us to regard them as pragmatic patterns of speech impact realization in effective public speaking. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20impact" title="speech impact">speech impact</a>, <a href="https://publications.waset.org/abstracts/search?q=manipulative%20argumentation" title=" manipulative argumentation"> manipulative argumentation</a>, <a href="https://publications.waset.org/abstracts/search?q=political%20discourse" title=" political discourse"> political discourse</a>, <a href="https://publications.waset.org/abstracts/search?q=technique" title=" technique"> technique</a> </p> <a href="https://publications.waset.org/abstracts/31058/speech-impact-realization-via-manipulative-argumentation-techniques-in-modern-american-political-discourse" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31058.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">508</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1917</span> Speech Enhancement Using Kalman Filter in Communication</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eng.%20Alaa%20K.%20Satti%20Salih">Eng. Alaa K. 
Satti Salih</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Revolutions Applications such as telecommunications, hands-free communications, recording, etc. which need at least one microphone, the signal is usually infected by noise and echo. The important application is the speech enhancement, which is done to remove suppressed noises and echoes taken by a microphone, beside preferred speech. Accordingly, the microphone signal has to be cleaned using digital signal processing DSP tools before it is played out, transmitted, or stored. Engineers have so far tried different approaches to improving the speech by get back the desired speech signal from the noisy observations. Especially Mobile communication, so in this paper will do reconstruction of the speech signal, observed in additive background noise, using the Kalman filter technique to estimate the parameters of the Autoregressive Process (AR) in the state space model and the output speech signal obtained by the MATLAB. The accurate estimation by Kalman filter on speech would enhance and reduce the noise then compare and discuss the results between actual values and estimated values which produce the reconstructed signals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autoregressive%20process" title="autoregressive process">autoregressive process</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20speech" title=" noise speech"> noise speech</a> </p> <a href="https://publications.waset.org/abstracts/7182/speech-enhancement-using-kalman-filter-in-communication" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7182.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">344</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1916</span> Comparative Methods for Speech Enhancement and the Effects on Text-Independent Speaker Identification Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Ajgou">R. Ajgou</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Sbaa"> S. Sbaa</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ghendir"> S. Ghendir</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Chemsa"> A. Chemsa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Taleb-Ahmed"> A. Taleb-Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The speech enhancement algorithm is to improve speech quality. In this paper, we review some speech enhancement methods and we evaluated their performance based on Perceptual Evaluation of Speech Quality scores (PESQ, ITU-T P.862). All method was evaluated in presence of different kind of noise using TIMIT database and NOIZEUS noisy speech corpus.. The noise was taken from the AURORA database and includes suburban train noise, babble, car, exhibition hall, restaurant, street, airport and train station noise. 
Simulation results showed that the approach tracking non-stationary noise outperformed the other methods in terms of the PESQ measure. Moreover, we evaluated the effects of the speech enhancement techniques on a speaker identification system based on an autoregressive (AR) model and Mel-frequency cepstral coefficients (MFCC). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20enhancement" title="speech enhancement">speech enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=pesq" title=" pesq"> pesq</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a> </p> <a href="https://publications.waset.org/abstracts/31102/comparative-methods-for-speech-enhancement-and-the-effects-on-text-independent-speaker-identification-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31102.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">424</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1915</span> Freedom of Speech and Involvement in Hatred Speech on Social Media Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sara%20Chinnasamy">Sara Chinnasamy</a>, <a href="https://publications.waset.org/abstracts/search?q=Michelle%20Gun"> Michelle Gun</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Adnan%20Hashim"> M. Adnan Hashim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Federal Constitution guarantees Malaysians the right to free speech and expression, yet hate speech is commonly found on social media platforms such as Facebook, Twitter, and Instagram. In the Malaysian social media sphere, most hate speech involves religion, race, and politics. Recent racial attacks on social media have created social tension among Malaysians. Many Malaysians argue for their right to freedom of speech; however, there are laws that limit public expression and protect social media users from becoming victims of hate speech. This paper explores Malaysian netizens' attitudes towards freedom of speech and their involvement in hate speech on social media, and examines the relationship between the two. For most Malaysians, practicing total freedom of speech in the open is unthinkable. As a result, the channel where they can articulate their feelings and opinions most liberally is the internet. With the advent of the internet, more and more Malaysians convey their viewpoints through various internet channels, although the sensitivity of the audience is seldom taken into account. Consequently, this situation has led to pockets of social disharmony among citizens. Although this unhealthy activity is denounced by the authorities, netizens are generally of the view that they have the right to write anything they want.
Using a quantitative method, a survey was conducted among Malaysians aged between 18 and 50 who are active social media users. Results reveal that although the relationship between involvement in hate speech on social media and attitude towards freedom of speech is weak, the association is still significant. As such, it can be presumed that hate speech on social media occurs because of the freedom of speech that social media channels afford. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=freedom%20of%20speech" title="freedom of speech">freedom of speech</a>, <a href="https://publications.waset.org/abstracts/search?q=hatred%20speech" title=" hatred speech"> hatred speech</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20media" title=" social media"> social media</a>, <a href="https://publications.waset.org/abstracts/search?q=Malaysia" title=" Malaysia"> Malaysia</a>, <a href="https://publications.waset.org/abstracts/search?q=netizens" title=" netizens"> netizens</a> </p> <a href="https://publications.waset.org/abstracts/72863/freedom-of-speech-and-involvement-in-hatred-speech-on-social-media-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72863.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">457</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1914</span> Possibilities, Challenges and the State of the Art of Automatic Speech Recognition in Air Traffic Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Van%20Nhan%20Nguyen">Van Nhan Nguyen</a>, <a href="https://publications.waset.org/abstracts/search?q=Harald%20Holone"> Harald Holone</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the past few years, a lot of research has been conducted to bring Automatic Speech Recognition (ASR) into various areas of Air Traffic Control (ATC), such as simulation and training, monitoring live operators with the aim of improving safety, measuring controller workload, and analyzing large quantities of controller-pilot speech. Due to the high accuracy requirements of the ATC context and its unique challenges, automatic speech recognition has not been widely adopted in this field. With the aim of providing a good starting point for researchers interested in bringing automatic speech recognition into ATC, this paper gives an overview of the possibilities and challenges of applying it in air traffic control. To provide this overview, we present an updated literature review of speech recognition technologies in general, as well as specific approaches relevant to the ATC context. Based on this literature review, criteria for selecting speech recognition approaches for the ATC domain are presented, and remaining challenges and possible solutions are discussed.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=asr" title=" asr"> asr</a>, <a href="https://publications.waset.org/abstracts/search?q=air%20traffic%20control" title=" air traffic control"> air traffic control</a>, <a href="https://publications.waset.org/abstracts/search?q=atc" title=" atc"> atc</a> </p> <a href="https://publications.waset.org/abstracts/31004/possibilities-challenges-and-the-state-of-the-art-of-automatic-speech-recognition-in-air-traffic-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31004.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1913</span> Minimum Data of a Speech Signal as Special Indicators of Identification in Phonoscopy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nazaket%20Gazieva">Nazaket Gazieva</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Voice biometric data associated with physiological, psychological and other factors are widely used in forensic phonoscopy. There are various methods for identifying and verifying a person by voice. This article explores the minimum speech signal data as individual parameters of a speech signal. Monozygotic twins are believed to be genetically identical. Using the minimum data of the speech signal, we came to the conclusion that the voice imprint of monozygotic twins is individual. According to the conclusion of the experiment, we can conclude that the minimum indicators of the speech signal are more stable and reliable for phonoscopic examinations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=phonogram" title="phonogram">phonogram</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20signal" title=" speech signal"> speech signal</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20characteristics" title=" temporal characteristics"> temporal characteristics</a>, <a href="https://publications.waset.org/abstracts/search?q=fundamental%20frequency" title=" fundamental frequency"> fundamental frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=biometric%20fingerprints" title=" biometric fingerprints"> biometric fingerprints</a> </p> <a href="https://publications.waset.org/abstracts/110332/minimum-data-of-a-speech-signal-as-special-indicators-of-identification-in-phonoscopy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1912</span> Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Krishna%20Mohan%20Bathula">Krishna Mohan Bathula</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatou%20Bintou%20Loucoubar"> Fatou Bintou Loucoubar</a>, <a href="https://publications.waset.org/abstracts/search?q=FNU%20Kaleemunnisa"> FNU Kaleemunnisa</a>, <a href="https://publications.waset.org/abstracts/search?q=Christelle%20Scharff"> Christelle Scharff</a>, <a href="https://publications.waset.org/abstracts/search?q=Mark%20Anthony%20De%20Castro"> Mark Anthony De Castro</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Voice recognition algorithms such as automatic speech recognition and text-to-speech systems with African languages can play an important role in bridging the digital divide of Artificial Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a Deep Learning model that can classify the user responses as inputs for an interactive voice response system. A dataset with Wolof language words ‘yes’ and ‘no’ is collected as audio recordings. A two stage Data Augmentation approach is adopted for enhancing the dataset size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients are implemented. Convolutional Neural Networks (CNNs) have proven to be very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. For performing voice response classification, the recordings are transformed into sound frequency feature spectra and then applied image classification methodology using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications associated with both web and mobile platforms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=interactive%20voice%20response" title=" interactive voice response"> interactive voice response</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20response%20recognition" title=" voice response recognition"> voice response recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=wolof%20word%20classification" title=" wolof word classification"> wolof word classification</a> </p> <a href="https://publications.waset.org/abstracts/150305/wolof-voice-response-recognition-system-a-deep-learning-model-for-wolof-audio-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150305.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">116</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1911</span> Intervention of Self-Limiting L1 Inner Speech during L2 Presentations: A Study of Bangla-English Bilinguals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdul%20Wahid">Abdul Wahid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Inner speech, also known as verbal thinking, self-talk or private speech, is characterized by the subjective language experience in the absence of overt or audible speech. It is a psychological form of verbal activity which is being rehearsed without the articulation of any sound wave. In Psychology, self-limiting speech means the type of speech which contains information that inhibits the development of the self. People, in most cases, experience inner speech in their first language. It is very frequent in Bangladesh where the Bangla (L1) speaking students lose track of speech during their presentations in English (L2). This paper investigates into the long pauses (more than 0.4 seconds long) in English (L2) presentations by Bangla speaking students (18-21 year old) and finds the intervention of Bangla (L1) inner speech as one of its causes. The overt speeches of the presenters are placed on Audacity Audio Editing software where the length of pauses are measured in milliseconds. Varieties of inner speech questionnaire (VISQ) have been conducted randomly amongst the participants out of whom 20 were selected who have similar phenomenology of inner speech. They have been interviewed to describe the type and content of the voices that went on in their head during the long pauses. The qualitative interview data are then codified and converted into quantitative data. It was observed that in more than 80% cases students experience self-limiting inner speech/self-talk during their unwanted pauses in L2 presentations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bangla-English%20Bilinguals" title="Bangla-English Bilinguals">Bangla-English Bilinguals</a>, <a href="https://publications.waset.org/abstracts/search?q=inner%20speech" title=" inner speech"> inner speech</a>, <a href="https://publications.waset.org/abstracts/search?q=L1%20intervention%20in%20bilingualism" title=" L1 intervention in bilingualism"> L1 intervention in bilingualism</a>, <a href="https://publications.waset.org/abstracts/search?q=motor%20schema" title=" motor schema"> motor schema</a>, <a href="https://publications.waset.org/abstracts/search?q=pauses" title=" pauses"> pauses</a>, <a href="https://publications.waset.org/abstracts/search?q=phonological%20loop" title=" phonological loop"> phonological loop</a>, <a href="https://publications.waset.org/abstracts/search?q=phonological%20store" title=" phonological store"> phonological store</a>, <a href="https://publications.waset.org/abstracts/search?q=working%20memory" title=" working memory"> working memory</a> </p> <a href="https://publications.waset.org/abstracts/128980/intervention-of-self-limiting-l1-inner-speech-during-l2-presentations-a-study-of-bangla-english-bilinguals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128980.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">152</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1910</span> Performance Evaluation of Acoustic-Spectrographic Voice Identification Method in Native and Non-Native Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=E.%20Krasnova">E. Krasnova</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Bulgakova"> E. Bulgakova</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Shchemelinin"> V. Shchemelinin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with acoustic-spectrographic voice identification method in terms of its performance in non-native language speech. Performance evaluation is conducted by comparing the result of the analysis of recordings containing native language speech with recordings that contain foreign language speech. Our research is based on Tajik and Russian speech of Tajik native speakers due to the character of the criminal situation with drug trafficking. We propose a pilot experiment that represents a primary attempt enter the field. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title="speaker identification">speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic-spectrographic%20method" title=" acoustic-spectrographic method"> acoustic-spectrographic method</a>, <a href="https://publications.waset.org/abstracts/search?q=non-native%20speech" title=" non-native speech"> non-native speech</a>, <a href="https://publications.waset.org/abstracts/search?q=performance%20evaluation" title=" performance evaluation"> performance evaluation</a> </p> <a href="https://publications.waset.org/abstracts/12496/performance-evaluation-of-acoustic-spectrographic-voice-identification-method-in-native-and-non-native-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12496.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1909</span> Automatic Segmentation of the Clean Speech Signal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Ben%20Messaoud">M. A. Ben Messaoud</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Bouzid"> A. Bouzid</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Ellouze"> N. Ellouze</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech Segmentation is the measure of the change point detection for partitioning an input speech signal into regions each of which accords to only one speaker. In this paper, we apply two features based on multi-scale product (MP) of the clean speech, namely the spectral centroid of MP, and the zero crossings rate of MP. We focus on multi-scale product analysis as an important tool for segmentation extraction. The multi-scale product is based on making the product of the speech wavelet transform coefficients at three successive dyadic scales. We have evaluated our method on the Keele database. Experimental results show the effectiveness of our method presenting a good performance. It shows that the two simple features can find word boundaries, and extracted the segments of the clean speech. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multiscale%20product" title="multiscale product">multiscale product</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20centroid" title=" spectral centroid"> spectral centroid</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20segmentation" title=" speech segmentation"> speech segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=zero%20crossings%20rate" title=" zero crossings rate"> zero crossings rate</a> </p> <a href="https://publications.waset.org/abstracts/17566/automatic-segmentation-of-the-clean-speech-signal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17566.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">500</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1908</span> The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fawaz%20S.%20Al-Anzi">Fawaz S. Al-Anzi</a>, <a href="https://publications.waset.org/abstracts/search?q=Dia%20AbuZeina"> Dia AbuZeina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech recognition is of an important contribution in promoting new technologies in human computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires different stages before obtaining the desired output. Among automatic speech recognition (ASR) components is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. Feature extraction process aims at approximating the linguistic content that is conveyed by the input speech signal. In speech processing field, there are several methods to extract speech features, however, Mel Frequency Cepstral Coefficients (MFCC) is the popular technique. It has been long observed that the MFCC is dominantly used in the well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice to identify the different speech segments in order to obtain the language phonemes for further training and decoding steps. Due to MFCC good performance, the previous studies show that the MFCC dominates the Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps that are performed to get these coefficients using the HTK toolkit. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title="speech recognition">speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic%20features" title=" acoustic features"> acoustic features</a>, <a href="https://publications.waset.org/abstracts/search?q=mel%20frequency" title=" mel frequency"> mel frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=cepstral%20coefficients" title=" cepstral coefficients"> cepstral coefficients</a> </p> <a href="https://publications.waset.org/abstracts/78382/the-capacity-of-mel-frequency-cepstral-coefficients-for-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=64">64</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=65">65</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div 
class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>