CINXE.COM
Search results for: intelligent speech interface
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: intelligent speech interface</title> <meta name="description" content="Search results for: intelligent speech interface"> <meta name="keywords" content="intelligent speech interface"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="intelligent speech interface" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="intelligent speech interface"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2910</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: intelligent speech interface</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2910</span> Unsupervised Assistive and Adaptative Intelligent Agent in Smart Enviroment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sebasti%C3%A3o%20Pais">Sebastião Pais</a>, <a href="https://publications.waset.org/abstracts/search?q=Jo%C3%A3o%20Casal"> João Casal</a>, <a href="https://publications.waset.org/abstracts/search?q=Ricardo%20Ponciano"> Ricardo Ponciano</a>, <a href="https://publications.waset.org/abstracts/search?q=S%C3%A9rgio%20Loren%C3%A7o"> Sérgio Lorenço</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The adaptation paradigm is a basic defining feature for pervasive computing systems. Adaptation systems must work efficiently in a smart environment while providing suitable information relevant to the user system interaction. The key objective is to deduce the information needed information changes. Therefore relying on fixed operational models would be inappropriate. This paper presents a study on developing an Intelligent Personal Assistant to assist the user in interacting with their Smart Environment. We propose an Unsupervised and Language-Independent Adaptation through Intelligent Speech Interface and a set of methods of Acquiring Knowledge, namely Semantic Similarity and Unsupervised Learning. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intelligent%20personal%20assistants" title="intelligent personal assistants">intelligent personal assistants</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface" title=" intelligent speech interface"> intelligent speech interface</a>, <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20learning" title=" unsupervised learning"> unsupervised learning</a>, <a href="https://publications.waset.org/abstracts/search?q=language-independent" title=" language-independent"> language-independent</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20acquisition" title=" knowledge acquisition"> knowledge acquisition</a>, <a href="https://publications.waset.org/abstracts/search?q=association%20measures" title=" association measures"> association measures</a>, <a href="https://publications.waset.org/abstracts/search?q=symmetric%20word%20similarities" title=" symmetric word similarities"> symmetric word similarities</a>, <a href="https://publications.waset.org/abstracts/search?q=attributional%20word%20similarities" title=" attributional word similarities"> attributional word similarities</a> </p> <a href="https://publications.waset.org/abstracts/21135/unsupervised-assistive-and-adaptative-intelligent-agent-in-smart-enviroment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21135.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">560</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2909</span> Unsupervised Assistive and Adaptive Intelligent Agent in Smart Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sebasti%C3%A3o%20Pais">Sebastião Pais</a>, <a href="https://publications.waset.org/abstracts/search?q=Jo%C3%A3o%20Casal"> João Casal</a>, <a href="https://publications.waset.org/abstracts/search?q=Ricardo%20Ponciano"> Ricardo Ponciano</a>, <a href="https://publications.waset.org/abstracts/search?q=S%C3%A9rgio%20Louren%C3%A7o"> Sérgio Lourenço</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The adaptation paradigm is a basic defining feature for pervasive computing systems. Adaptation systems must work efficiently in smart environment while providing suitable information relevant to the user system interaction. The key objective is to deduce the information needed information changes. Therefore, relying on fixed operational models would be inappropriate. This paper presents a study on developing a Intelligent Personal Assistant to assist the user in interacting with their Smart Environment. We propose a Unsupervised and Language-Independent Adaptation through Intelligent Speech Interface and a set of methods of Acquiring Knowledge, namely Semantic Similarity and Unsupervised Learning. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intelligent%20personal%20assistants" title="intelligent personal assistants">intelligent personal assistants</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface" title=" intelligent speech interface"> intelligent speech interface</a>, <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20learning" title=" unsupervised learning"> unsupervised learning</a>, <a href="https://publications.waset.org/abstracts/search?q=language-independent" title=" language-independent"> language-independent</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20acquisition" title=" knowledge acquisition"> knowledge acquisition</a>, <a href="https://publications.waset.org/abstracts/search?q=association%20measures" title=" association measures"> association measures</a>, <a href="https://publications.waset.org/abstracts/search?q=symmetric%20word%20similarities" title=" symmetric word similarities"> symmetric word similarities</a>, <a href="https://publications.waset.org/abstracts/search?q=attributional%20word%20similarities" title=" attributional word similarities"> attributional word similarities</a> </p> <a href="https://publications.waset.org/abstracts/21136/unsupervised-assistive-and-adaptive-intelligent-agent-in-smart-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21136.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">643</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2908</span> Robust Noisy Speech Identification Using Frame Classifier Derived Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Punnoose%20A.%20K.">Punnoose A. K.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an approach for identifying noisy speech recording using a multi-layer perception (MLP) trained to predict phonemes from acoustic features. Characteristics of the MLP posteriors are explored for clean speech and noisy speech at the frame level. Appropriate density functions are used to fit the softmax probability of the clean and noisy speech. A function that takes into account the ratio of the softmax probability density of noisy speech to clean speech is formulated. These phoneme independent scoring is weighted using a phoneme-specific weightage to make the scoring more robust. Simple thresholding is used to identify the noisy speech recording from the clean speech recordings. The approach is benchmarked on standard databases, with a focus on precision. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=noisy%20speech%20identification" title="noisy speech identification">noisy speech identification</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20pre-processing" title=" speech pre-processing"> speech pre-processing</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20robustness" title=" noise robustness"> noise robustness</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20engineering" title=" feature engineering"> feature engineering</a> </p> <a href="https://publications.waset.org/abstracts/144694/robust-noisy-speech-identification-using-frame-classifier-derived-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144694.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2907</span> An Analysis of Illocutioary Act in Martin Luther King Jr.'s Propaganda Speech Entitled 'I Have a Dream'</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahgfirah%20Firdaus%20Soberatta">Mahgfirah Firdaus Soberatta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Language cannot be separated from human life. Humans use language to convey ideas, thoughts, and feelings. We can use words for different things for example like asserted, advising, promise, give opinions, hopes, etc. Propaganda is an attempt which seeks to obtain stable behavior to adopt everyone to his everyday life. It also controls the thoughts and attitudes of individuals in social settings permanent. In this research, the writer will discuss about the speech act in a propaganda speech delivered by Martin Luther King Jr. in Washington at Lincoln Memorial on August 28, 1963. 'I Have a Dream' is a public speech delivered by American civil rights activist MLK, he calls from an end to racism in USA. In this research, the writer uses Searle theory to analyze the types of illocutionary speech act that used by Martin Luther King Jr. in his propaganda speech. In this research, the writer uses a qualitative method described in descriptive, because the research wants to describe and explain the types of illocutionary speech acts used by Martin Luther King Jr. in his propaganda speech. The findings indicate that there are five types of speech acts in Martin Luther King Jr. speech. MLK also used direct speech and indirect speech in his propaganda speech. However, direct speech is the dominant speech act that MLK used in his propaganda speech. It is hoped that this research is useful for the readers to enrich their knowledge in a particular field of pragmatic speech acts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20act" title="speech act">speech act</a>, <a href="https://publications.waset.org/abstracts/search?q=propaganda" title=" propaganda"> propaganda</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20Luther%20King%20Jr." title=" Martin Luther King Jr."> Martin Luther King Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=speech" title=" speech"> speech</a> </p> <a href="https://publications.waset.org/abstracts/45649/an-analysis-of-illocutioary-act-in-martin-luther-king-jrs-propaganda-speech-entitled-i-have-a-dream" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45649.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">441</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2906</span> Applying an Automatic Speech Intelligent System to the Health Care of Patients Undergoing Long-Term Hemodialysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kuo-Kai%20Lin">Kuo-Kai Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Po-Lun%20Chang"> Po-Lun Chang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Research Background and Purpose: Following the development of the Internet and multimedia, the Internet and information technology have become crucial avenues of modern communication and knowledge acquisition. The advantages of using mobile devices for learning include making learning borderless and accessible. Mobile learning has become a trend in disease management and health promotion in recent years. End-stage renal disease (ESRD) is an irreversible chronic disease, and patients who do not receive kidney transplants can only rely on hemodialysis or peritoneal dialysis to survive. Due to the complexities in caregiving for patients with ESRD that stem from their advanced age and other comorbidities, the patients’ incapacity of self-care leads to an increase in the need to rely on their families or primary caregivers, although whether the primary caregivers adequately understand and implement patient care is a topic of concern. Therefore, this study explored whether primary caregivers’ health care provisions can be improved through the intervention of an automatic speech intelligent system, thereby improving the objective health outcomes of patients undergoing long-term dialysis. Method: This study developed an automatic speech intelligent system with healthcare functions such as health information voice prompt, two-way feedback, real-time push notification, and health information delivery. Convenience sampling was adopted to recruit eligible patients from a hemodialysis center at a regional teaching hospital as research participants. A one-group pretest-posttest design was adopted. Descriptive and inferential statistics were calculated from the demographic information collected from questionnaires answered by patients and primary caregivers, and from a medical record review, a health care scale (recorded six months before and after the implementation of intervention measures), a subjective health assessment, and a report of objective physiological indicators. The changes in health care behaviors, subjective health status, and physiological indicators before and after the intervention of the proposed automatic speech intelligent system were then compared. Conclusion and Discussion: The preliminary automatic speech intelligent system developed in this study was tested with 20 pretest patients at the recruitment location, and their health care capacity scores improved from 59.1 to 72.8; comparisons through a nonparametric test indicated a significant difference (p < .01). The average score for their subjective health assessment rose from 2.8 to 3.3. A survey of their objective physiological indicators discovered that the compliance rate for the blood potassium level was the most significant indicator; its average compliance rate increased from 81% to 94%. The results demonstrated that this automatic speech intelligent system yielded a higher efficacy for chronic disease care than did conventional health education delivered by nurses. Therefore, future efforts will continue to increase the number of recruited patients and to refine the intelligent system. Future improvements to the intelligent system can be expected to enhance its effectiveness even further. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20intelligent%20system%20for%20health%20care" title="automatic speech intelligent system for health care">automatic speech intelligent system for health care</a>, <a href="https://publications.waset.org/abstracts/search?q=primary%20caregiver" title=" primary caregiver"> primary caregiver</a>, <a href="https://publications.waset.org/abstracts/search?q=long-term%20hemodialysis" title=" long-term hemodialysis"> long-term hemodialysis</a>, <a href="https://publications.waset.org/abstracts/search?q=health%20care%20capabilities" title=" health care capabilities"> health care capabilities</a>, <a href="https://publications.waset.org/abstracts/search?q=health%20outcomes" title=" health outcomes"> health outcomes</a> </p> <a href="https://publications.waset.org/abstracts/87541/applying-an-automatic-speech-intelligent-system-to-the-health-care-of-patients-undergoing-long-term-hemodialysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87541.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">110</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2905</span> Voice Commands Recognition of Mentor Robot in Noisy Environment Using HTK</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khenfer-Koummich%20Fatma">Khenfer-Koummich Fatma</a>, <a href="https://publications.waset.org/abstracts/search?q=Hendel%20Fatiha"> Hendel Fatiha</a>, <a href="https://publications.waset.org/abstracts/search?q=Mesbahi%20Larbi"> Mesbahi Larbi </a> </p> <p class="card-text"><strong>Abstract:</strong></p> this paper presents an approach based on Hidden Markov Models (HMM: Hidden Markov Model) using HTK tools. The goal is to create a man-machine interface with a voice recognition system that allows the operator to tele-operate a mentor robot to execute specific tasks as rotate, raise, close, etc. This system should take into account different levels of environmental noise. This approach has been applied to isolated words representing the robot commands spoken in two languages: French and Arabic. The recognition rate obtained is the same in both speeches, Arabic and French in the neutral words. However, there is a slight difference in favor of the Arabic speech when Gaussian white noise is added with a Signal to Noise Ratio (SNR) equal to 30 db, the Arabic speech recognition rate is 69% and 80% for French speech recognition rate. This can be explained by the ability of phonetic context of each speech when the noise is added. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=voice%20command" title="voice command">voice command</a>, <a href="https://publications.waset.org/abstracts/search?q=HMM" title=" HMM"> HMM</a>, <a href="https://publications.waset.org/abstracts/search?q=TIMIT" title=" TIMIT"> TIMIT</a>, <a href="https://publications.waset.org/abstracts/search?q=noise" title=" noise"> noise</a>, <a href="https://publications.waset.org/abstracts/search?q=HTK" title=" HTK"> HTK</a>, <a href="https://publications.waset.org/abstracts/search?q=Arabic" title=" Arabic"> Arabic</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a> </p> <a href="https://publications.waset.org/abstracts/24454/voice-commands-recognition-of-mentor-robot-in-noisy-environment-using-htk" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24454.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">382</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2904</span> The Online Advertising Speech that Effect to the Thailand Internet User Decision Making</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Panprae%20Bunyapukkna">Panprae Bunyapukkna</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigated figures of speech used in fragrance advertising captions on the Internet. The objectives of the study were to find out the frequencies of figures of speech in fragrance advertising captions and the types of figures of speech most commonly applied in captions. The relation between figures of speech and fragrance was also examined in order to analyze how figures of speech were used to represent fragrance. Thirty-five fragrance advertisements were randomly selected from the Internet. Content analysis was applied in order to consider the relation between figures of speech and fragrance. The results showed that figures of speech were found in almost every fragrance advertisement except one advertisement of Lancôme. Thirty-four fragrance advertising captions used at least one kind of figure of speech. Metaphor was most frequently found and also most frequently applied in fragrance advertising captions, followed by alliteration, rhyme, simile and personification, and hyperbole respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advertising%20speech" title="advertising speech">advertising speech</a>, <a href="https://publications.waset.org/abstracts/search?q=fragrance%20advertisements" title=" fragrance advertisements"> fragrance advertisements</a>, <a href="https://publications.waset.org/abstracts/search?q=figures%20of%20speech" title=" figures of speech"> figures of speech</a>, <a href="https://publications.waset.org/abstracts/search?q=metaphor" title=" metaphor"> metaphor</a> </p> <a href="https://publications.waset.org/abstracts/44259/the-online-advertising-speech-that-effect-to-the-thailand-internet-user-decision-making" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44259.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">241</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2903</span> TeleMe Speech Booster: Web-Based Speech Therapy and Training Program for Children with Articulation Disorders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20Treerattanaphan">C. Treerattanaphan</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Boonpramuk"> P. Boonpramuk</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Singla"> P. Singla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Frequent, continuous speech training has proven to be a necessary part of a successful speech therapy process, but constraints of traveling time and employment dispensation become key obstacles especially for individuals living in remote areas or for dependent children who have working parents. In order to ameliorate speech difficulties with ample guidance from speech therapists, a website has been developed that supports speech therapy and training for people with articulation disorders in the standard Thai language. This web-based program has the ability to record speech training exercises for each speech trainee. The records will be stored in a database for the speech therapist to investigate, evaluate, compare and keep track of all trainees’ progress in detail. Speech trainees can request live discussions via video conference call when needed. Communication through this web-based program facilitates and reduces training time in comparison to walk-in training or appointments. This type of training also allows people with articulation disorders to practice speech lessons whenever or wherever is convenient for them, which can lead to a more regular training processes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=web-based%20remote%20training%20program" title="web-based remote training program">web-based remote training program</a>, <a href="https://publications.waset.org/abstracts/search?q=Thai%20speech%20therapy" title=" Thai speech therapy"> Thai speech therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=articulation%20disorders" title=" articulation disorders"> articulation disorders</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20booster" title=" speech booster"> speech booster</a> </p> <a href="https://publications.waset.org/abstracts/13916/teleme-speech-booster-web-based-speech-therapy-and-training-program-for-children-with-articulation-disorders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13916.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2902</span> Development of Non-Intrusive Speech Evaluation Measure Using S-Transform and Light-Gbm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tusar%20Kanti%20Dash">Tusar Kanti Dash</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganapati%20Panda"> Ganapati Panda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The evaluation of speech quality and intelligence is critical to the overall effectiveness of the Speech Enhancement Algorithms. Several intrusive and non-intrusive measures are employed to calculate these parameters. Non-Intrusive Evaluation is most challenging as, very often, the reference clean speech data is not available. In this paper, a novel non-intrusive speech evaluation measure is proposed using audio features derived from the Stockwell transform. These features are used with the Light Gradient Boosting Machine for the effective prediction of speech quality and intelligibility. The proposed model is analyzed using noisy and reverberant speech from four databases, and the results are compared with the standard Intrusive Evaluation Measures. It is observed from the comparative analysis that the proposed model is performing better than the standard Non-Intrusive models. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=non-Intrusive%20speech%20evaluation" title="non-Intrusive speech evaluation">non-Intrusive speech evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=S-transform" title=" S-transform"> S-transform</a>, <a href="https://publications.waset.org/abstracts/search?q=light%20GBM" title=" light GBM"> light GBM</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20quality" title=" speech quality"> speech quality</a>, <a href="https://publications.waset.org/abstracts/search?q=and%20intelligibility" title=" and intelligibility"> and intelligibility</a> </p> <a href="https://publications.waset.org/abstracts/139626/development-of-non-intrusive-speech-evaluation-measure-using-s-transform-and-light-gbm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139626.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2901</span> Annexation (Al-Iḍāfah) in Thariq bin Ziyad’s Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Annisa%20D.%20Febryandini">Annisa D. Febryandini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Annexation is a typical construction that commonly used in Arabic language. The use of the construction appears in Arabic speech such as the speech of Thariq bin Ziyad. The speech as one of the most famous speeches in the history of Islam uses many annexations. This qualitative research paper uses the secondary data by library method. Based on the data, this paper concludes that the speech has two basic structures with some variations and has some grammatical relationship. Different from the other researches that identify the speech in sociology field, the speech in this paper will be analyzed in linguistic field to take a look at the structure of its annexation as well as the grammatical relationship. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=annexation" title="annexation">annexation</a>, <a href="https://publications.waset.org/abstracts/search?q=Thariq%20bin%20Ziyad" title=" Thariq bin Ziyad"> Thariq bin Ziyad</a>, <a href="https://publications.waset.org/abstracts/search?q=grammatical%20relationship" title=" grammatical relationship"> grammatical relationship</a>, <a href="https://publications.waset.org/abstracts/search?q=Arabic%20syntax" title=" Arabic syntax"> Arabic syntax</a> </p> <a href="https://publications.waset.org/abstracts/72847/annexation-al-iafah-in-thariq-bin-ziyads-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72847.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2900</span> Blind Speech Separation Using SRP-PHAT Localization and Optimal Beamformer in Two-Speaker Environments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hai%20Quang%20Hong%20Dam">Hai Quang Hong Dam</a>, <a href="https://publications.waset.org/abstracts/search?q=Hai%20Ho"> Hai Ho</a>, <a href="https://publications.waset.org/abstracts/search?q=Minh%20Hoang%20Le%20Ngo"> Minh Hoang Le Ngo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates the problem of blind speech separation from the speech mixture of two speakers. A voice activity detector employing the Steered Response Power - Phase Transform (SRP-PHAT) is presented for detecting the activity information of speech sources and then the desired speech signals are extracted from the speech mixture by using an optimal beamformer. For evaluation, the algorithm effectiveness, a simulation using real speech recordings had been performed in a double-talk situation where two speakers are active all the time. Evaluations show that the proposed blind speech separation algorithm offers a good interference suppression level whilst maintaining a low distortion level of the desired signal. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20speech%20separation" title="blind speech separation">blind speech separation</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20activity%20detector" title=" voice activity detector"> voice activity detector</a>, <a href="https://publications.waset.org/abstracts/search?q=SRP-PHAT" title=" SRP-PHAT"> SRP-PHAT</a>, <a href="https://publications.waset.org/abstracts/search?q=optimal%20beamformer" title=" optimal beamformer"> optimal beamformer</a> </p> <a href="https://publications.waset.org/abstracts/53263/blind-speech-separation-using-srp-phat-localization-and-optimal-beamformer-in-two-speaker-environments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53263.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">283</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2899</span> Recognition of Voice Commands of Mentor Robot in Noisy Environment Using Hidden Markov Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khenfer%20Koummich%20Fatma">Khenfer Koummich Fatma</a>, <a href="https://publications.waset.org/abstracts/search?q=Hendel%20Fatiha"> Hendel Fatiha</a>, <a href="https://publications.waset.org/abstracts/search?q=Mesbahi%20Larbi"> Mesbahi Larbi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an approach based on Hidden Markov Models (HMM: Hidden Markov Model) using HTK tools. The goal is to create a human-machine interface with a voice recognition system that allows the operator to teleoperate a mentor robot to execute specific tasks as rotate, raise, close, etc. This system should take into account different levels of environmental noise. This approach has been applied to isolated words representing the robot commands pronounced in two languages: French and Arabic. The obtained recognition rate is the same in both speeches, Arabic and French in the neutral words. However, there is a slight difference in favor of the Arabic speech when Gaussian white noise is added with a Signal to Noise Ratio (SNR) equals 30 dB, in this case; the Arabic speech recognition rate is 69%, and the French speech recognition rate is 80%. This can be explained by the ability of phonetic context of each speech when the noise is added. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arabic%20speech%20recognition" title="Arabic speech recognition">Arabic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Hidden%20Markov%20Model%20%28HMM%29" title=" Hidden Markov Model (HMM)"> Hidden Markov Model (HMM)</a>, <a href="https://publications.waset.org/abstracts/search?q=HTK" title=" HTK"> HTK</a>, <a href="https://publications.waset.org/abstracts/search?q=noise" title=" noise"> noise</a>, <a href="https://publications.waset.org/abstracts/search?q=TIMIT" title=" TIMIT"> TIMIT</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20command" title=" voice command"> voice command</a> </p> <a href="https://publications.waset.org/abstracts/67988/recognition-of-voice-commands-of-mentor-robot-in-noisy-environment-using-hidden-markov-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67988.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">385</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2898</span> Human Computer Interaction Using Computer Vision and Speech Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shreyansh%20Jain%20Jeetmal">Shreyansh Jain Jeetmal</a>, <a href="https://publications.waset.org/abstracts/search?q=Shobith%20P.%20Chadaga"> Shobith P. Chadaga</a>, <a href="https://publications.waset.org/abstracts/search?q=Shreyas%20H.%20Srinivas"> Shreyas H. Srinivas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Internet of Things (IoT) is seen as the next major step in the ongoing revolution in the Information Age. It is predicted that in the near future billions of embedded devices will be communicating with each other to perform a plethora of tasks with or without human intervention. One of the major ongoing hotbed of research activity in IoT is Human Computer Interaction (HCI). HCI is used to facilitate communication between an intelligent system and a user. An intelligent system typically comprises of a system consisting of various sensors, actuators and embedded controllers which communicate with each other to monitor data collected from the environment. Communication by the user to the system is typically done using voice. One of the major ongoing applications of HCI is in home automation as a personal assistant. The prime objective of our project is to implement a use case of HCI for home automation. Our system is designed to detect and recognize the users and personalize the appliances in the house according to their individual preferences. Our HCI system is also capable of speaking with the user when certain commands are spoken such as searching on the web for information and controlling appliances. Our system can also monitor the environment in the house such as air quality and gas leakages for added safety. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20computer%20interaction" title="human computer interaction">human computer interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=internet%20of%20things" title=" internet of things"> internet of things</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20networks" title=" sensor networks"> sensor networks</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20to%20text" title=" speech to text"> speech to text</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20to%20speech" title=" text to speech"> text to speech</a>, <a href="https://publications.waset.org/abstracts/search?q=android" title=" android"> android</a> </p> <a href="https://publications.waset.org/abstracts/73991/human-computer-interaction-using-computer-vision-and-speech-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73991.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">362</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2897</span> Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nassib%20Abdallah">Nassib Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Pierre%20Chauvet"> Pierre Chauvet</a>, <a href="https://publications.waset.org/abstracts/search?q=Abd%20El%20Salam%20Hajjar"> Abd El Salam Hajjar</a>, <a href="https://publications.waset.org/abstracts/search?q=Bassam%20Daya"> Bassam Daya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an optimized brain computer interface (BCI) system for unspoken speech recognition, based on the fact that the constructions of unspoken words rely strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG Acquisition module based on a non-invasive headset with 14 electrodes; (ii) the Preprocessing module to remove noise and artifacts, using the Common Average Reference method; (iii) the Features Extraction module, using Wavelet Packet Transform (WPT); (iv) the Classification module based on a one-hidden layer artificial neural network. The present study consists of comparing the recognition accuracy of 5 Arabic words, when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the selection effect of the subbands produced by the WPT module. After applying the articial neural network on the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of 8 levels of the WPT decomposition. However, by using only the 4 electrodes near Wernicke Area and the 6 middle subbands of the WPT, we obtain a high reduction of the dataset size, equal to approximately 19% of the total dataset, with 67.5% of accuracy rate. This reduction appears particularly important to improve the design of a low cost and simple to use BCI, trained for several words. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brain-computer%20interface" title="brain-computer interface">brain-computer interface</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title=" artificial neural network"> artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=electroencephalography" title=" electroencephalography"> electroencephalography</a>, <a href="https://publications.waset.org/abstracts/search?q=EEG" title=" EEG"> EEG</a>, <a href="https://publications.waset.org/abstracts/search?q=wernicke%20area" title=" wernicke area"> wernicke area</a> </p> <a href="https://publications.waset.org/abstracts/86773/optimized-brain-computer-interface-system-for-unspoken-speech-recognition-role-of-wernicke-area" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86773.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">271</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2896</span> Effect of Signal Acquisition Procedure on Imagined Speech Classification Accuracy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.R%20Asghari%20Bejestani">M.R Asghari Bejestani</a>, <a href="https://publications.waset.org/abstracts/search?q=Gh.%20R.%20Mohammad%20Khani"> Gh. R. Mohammad Khani</a>, <a href="https://publications.waset.org/abstracts/search?q=V.R.%20Nafisi"> V.R. Nafisi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Imagined speech recognition is one of the most interesting approaches to BCI development and a lot of works have been done in this area. Many different experiments have been designed and hundreds of combinations of feature extraction methods and classifiers have been examined. Reported classification accuracies range from the chance level to more than 90%. Based on non-stationary nature of brain signals, we have introduced 3 classification modes according to time difference in inter and intra-class samples. The modes can explain the diversity of reported results and predict the range of expected classification accuracies from the brain signal accusation procedure. In this paper, a few samples are illustrated by inspecting results of some previous works. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brain%20computer%20interface" title="brain computer interface">brain computer interface</a>, <a href="https://publications.waset.org/abstracts/search?q=silent%20talk" title=" silent talk"> silent talk</a>, <a href="https://publications.waset.org/abstracts/search?q=imagined%20speech" title=" imagined speech"> imagined speech</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a> </p> <a href="https://publications.waset.org/abstracts/154214/effect-of-signal-acquisition-procedure-on-imagined-speech-classification-accuracy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/154214.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2895</span> Speech Impact Realization via Manipulative Argumentation Techniques in Modern American Political Discourse</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zarine%20Avetisyan">Zarine Avetisyan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Paper presents the discussion of scholars concerning speech impact, peculiarities of its realization, speech strategies, and techniques. Departing from the viewpoints of many prominent linguists, the paper suggests manipulative argumentation be viewed as a most pervasive speech strategy with a certain set of techniques which are to be found in modern American political discourse. The precedence of their occurrence allows us to regard them as pragmatic patterns of speech impact realization in effective public speaking. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20impact" title="speech impact">speech impact</a>, <a href="https://publications.waset.org/abstracts/search?q=manipulative%20argumentation" title=" manipulative argumentation"> manipulative argumentation</a>, <a href="https://publications.waset.org/abstracts/search?q=political%20discourse" title=" political discourse"> political discourse</a>, <a href="https://publications.waset.org/abstracts/search?q=technique" title=" technique"> technique</a> </p> <a href="https://publications.waset.org/abstracts/31058/speech-impact-realization-via-manipulative-argumentation-techniques-in-modern-american-political-discourse" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31058.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">508</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2894</span> Speech Enhancement Using Kalman Filter in Communication</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eng.%20Alaa%20K.%20Satti%20Salih">Eng. Alaa K. Satti Salih</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Revolutions Applications such as telecommunications, hands-free communications, recording, etc. which need at least one microphone, the signal is usually infected by noise and echo. The important application is the speech enhancement, which is done to remove suppressed noises and echoes taken by a microphone, beside preferred speech. Accordingly, the microphone signal has to be cleaned using digital signal processing DSP tools before it is played out, transmitted, or stored. Engineers have so far tried different approaches to improving the speech by get back the desired speech signal from the noisy observations. Especially Mobile communication, so in this paper will do reconstruction of the speech signal, observed in additive background noise, using the Kalman filter technique to estimate the parameters of the Autoregressive Process (AR) in the state space model and the output speech signal obtained by the MATLAB. The accurate estimation by Kalman filter on speech would enhance and reduce the noise then compare and discuss the results between actual values and estimated values which produce the reconstructed signals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autoregressive%20process" title="autoregressive process">autoregressive process</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20speech" title=" noise speech"> noise speech</a> </p> <a href="https://publications.waset.org/abstracts/7182/speech-enhancement-using-kalman-filter-in-communication" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7182.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">344</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2893</span> Application of Intelligent City and Hierarchy Intelligent Buildings in Kuala Lumpur</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jalalludin%20Abdul%20Malek">Jalalludin Abdul Malek</a>, <a href="https://publications.waset.org/abstracts/search?q=Zurinah%20Tahir"> Zurinah Tahir </a> </p> <p class="card-text"><strong>Abstract:</strong></p> When the Multimedia Super Corridor (MSC) was launched in 1995, it became the catalyst for the implementation of the intelligent city concept, an area that covers about 15 x 50 kilometres from Kuala Lumpur City Centre (KLCC), Putrajaya and Kuala Lumpur International Airport (KLIA). The concept of intelligent city means that the city has an advanced infrastructure and infostructure such as information technology, advanced telecommunication systems, electronic technology and mechanical technology to be utilized for the development of urban elements such as industries, health, services, transportation and communications. For example, the Golden Triangle of Kuala Lumpur has also many intelligent buildings developed by the private sector such as the KLCC Tower to implement the intelligent city concept. Consequently, the intelligent buildings in the Golden Triangle can be linked directly to the Putrajaya Intelligent City and Cyberjaya Intelligent City within the confines of the MSC. However, the reality of the situation is that there are not many intelligent buildings within the Golden Triangle Kuala Lumpur scope which can be considered of high-standard intelligent buildings as referred to by the Intelligence Quotient (IQ) building standard. This increases the need to implement the real ‘intelligent city’ concept. This paper aims to show the strengths and weaknesses of the intelligent buildings in the Golden Triangle by taking into account aspects of 'intelligence' in the areas of technology and infrastructure of buildings. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intelligent%20city%20concepts" title="intelligent city concepts">intelligent city concepts</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20building" title=" intelligent building"> intelligent building</a>, <a href="https://publications.waset.org/abstracts/search?q=Golden%20Triangle" title=" Golden Triangle"> Golden Triangle</a>, <a href="https://publications.waset.org/abstracts/search?q=Kuala%20Lumpur" title=" Kuala Lumpur "> Kuala Lumpur </a> </p> <a href="https://publications.waset.org/abstracts/57007/application-of-intelligent-city-and-hierarchy-intelligent-buildings-in-kuala-lumpur" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57007.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2892</span> Comparative Methods for Speech Enhancement and the Effects on Text-Independent Speaker Identification Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Ajgou">R. Ajgou</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Sbaa"> S. Sbaa</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ghendir"> S. Ghendir</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Chemsa"> A. Chemsa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Taleb-Ahmed"> A. Taleb-Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The speech enhancement algorithm is to improve speech quality. In this paper, we review some speech enhancement methods and we evaluated their performance based on Perceptual Evaluation of Speech Quality scores (PESQ, ITU-T P.862). All method was evaluated in presence of different kind of noise using TIMIT database and NOIZEUS noisy speech corpus.. The noise was taken from the AURORA database and includes suburban train noise, babble, car, exhibition hall, restaurant, street, airport and train station noise. Simulation results showed improved performance of speech enhancement for Tracking of non-stationary noise approach in comparison with various methods in terms of PESQ measure. Moreover, we have evaluated the effects of the speech enhancement technique on Speaker Identification system based on autoregressive (AR) model and Mel-frequency Cepstral coefficients (MFCC). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20enhancement" title="speech enhancement">speech enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=pesq" title=" pesq"> pesq</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a> </p> <a href="https://publications.waset.org/abstracts/31102/comparative-methods-for-speech-enhancement-and-the-effects-on-text-independent-speaker-identification-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31102.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">424</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2891</span> Freedom of Speech and Involvement in Hatred Speech on Social Media Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sara%20Chinnasamy">Sara Chinnasamy</a>, <a href="https://publications.waset.org/abstracts/search?q=Michelle%20Gun"> Michelle Gun</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Adnan%20Hashim"> M. Adnan Hashim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Federal Constitution guarantees Malaysians the right to free speech and expression; yet hatred speech can be commonly found on social media platforms such as Facebook, Twitter, and Instagram. In Malaysia social media sphere, most hatred speech involves religion, race and politics. Recent cases of racial attacks on social media have created social tensions among Malaysians. Many Malaysians always argue on their rights to freedom of speech. However, there are laws that limit their expression to the public and protecting social media users from being a victim of hate speech. This paper aims to explore the attitude and involvement of Malaysian netizens towards freedom of speech and hatred speech on social media. It also examines the relationship between involvement in hatred speech among Malaysian netizens and attitude towards freedom of speech. For most Malaysians, practicing total freedom of speech in the open is unthinkable. As a result, the best channel to articulate their feelings and opinions liberally is the internet. With the advent of the internet medium, more and more Malaysians are conveying their viewpoints using the various internet channels although sensitivity of the audience is seldom taken into account. Consequently, this situation has led to pockets of social disharmony among the citizens. Although this unhealthy activity is denounced by the authority, netizens are generally of the view that they have the right to write anything they want. Using the quantitative method, survey was conducted among Malaysians aged between 18 and 50 years who are active social media users. Results from the survey reveal that despite a weak relationship level between hatred speech involvement on social media and attitude towards freedom of speech, the association is still considerably significant. As such, it can be safely presumed that hatred speech on social media occurs due to the freedom of speech that exists by way of social media channels. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=freedom%20of%20speech" title="freedom of speech">freedom of speech</a>, <a href="https://publications.waset.org/abstracts/search?q=hatred%20speech" title=" hatred speech"> hatred speech</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20media" title=" social media"> social media</a>, <a href="https://publications.waset.org/abstracts/search?q=Malaysia" title=" Malaysia"> Malaysia</a>, <a href="https://publications.waset.org/abstracts/search?q=netizens" title=" netizens"> netizens</a> </p> <a href="https://publications.waset.org/abstracts/72863/freedom-of-speech-and-involvement-in-hatred-speech-on-social-media-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72863.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">457</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2890</span> Possibilities, Challenges and the State of the Art of Automatic Speech Recognition in Air Traffic Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Van%20Nhan%20Nguyen">Van Nhan Nguyen</a>, <a href="https://publications.waset.org/abstracts/search?q=Harald%20Holone"> Harald Holone</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the past few years, a lot of research has been conducted to bring Automatic Speech Recognition (ASR) into various areas of Air Traffic Control (ATC), such as air traffic control simulation and training, monitoring live operators for with the aim of safety improvements, air traffic controller workload measurement and conducting analysis on large quantities controller-pilot speech. Due to the high accuracy requirements of the ATC context and its unique challenges, automatic speech recognition has not been widely adopted in this field. With the aim of providing a good starting point for researchers who are interested bringing automatic speech recognition into ATC, this paper gives an overview of possibilities and challenges of applying automatic speech recognition in air traffic control. To provide this overview, we present an updated literature review of speech recognition technologies in general, as well as specific approaches relevant to the ATC context. Based on this literature review, criteria for selecting speech recognition approaches for the ATC domain are presented, and remaining challenges and possible solutions are discussed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=asr" title=" asr"> asr</a>, <a href="https://publications.waset.org/abstracts/search?q=air%20traffic%20control" title=" air traffic control"> air traffic control</a>, <a href="https://publications.waset.org/abstracts/search?q=atc" title=" atc"> atc</a> </p> <a href="https://publications.waset.org/abstracts/31004/possibilities-challenges-and-the-state-of-the-art-of-automatic-speech-recognition-in-air-traffic-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31004.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2889</span> Minimum Data of a Speech Signal as Special Indicators of Identification in Phonoscopy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nazaket%20Gazieva">Nazaket Gazieva</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Voice biometric data associated with physiological, psychological and other factors are widely used in forensic phonoscopy. There are various methods for identifying and verifying a person by voice. This article explores the minimum speech signal data as individual parameters of a speech signal. Monozygotic twins are believed to be genetically identical. Using the minimum data of the speech signal, we came to the conclusion that the voice imprint of monozygotic twins is individual. According to the conclusion of the experiment, we can conclude that the minimum indicators of the speech signal are more stable and reliable for phonoscopic examinations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=phonogram" title="phonogram">phonogram</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20signal" title=" speech signal"> speech signal</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20characteristics" title=" temporal characteristics"> temporal characteristics</a>, <a href="https://publications.waset.org/abstracts/search?q=fundamental%20frequency" title=" fundamental frequency"> fundamental frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=biometric%20fingerprints" title=" biometric fingerprints"> biometric fingerprints</a> </p> <a href="https://publications.waset.org/abstracts/110332/minimum-data-of-a-speech-signal-as-special-indicators-of-identification-in-phonoscopy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2888</span> Intelligent Tutor Using Adaptive Learning to Partial Discharges with Virtual Reality Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hern%C3%A1ndez%20Yasmi%CC%81n">Hernández Yasmín</a>, <a href="https://publications.waset.org/abstracts/search?q=Ochoa%20Alberto"> Ochoa Alberto</a>, <a href="https://publications.waset.org/abstracts/search?q=Hurtado%20Diego"> Hurtado Diego</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this study is developing an intelligent tutoring system for electrical operators training with virtual reality systems at the laboratory center of partials discharges LAPEM. The electrical domain requires efficient and well trained personnel, due to the danger involved in the partials discharges field, qualified electricians are required. This paper presents an overview of the intelligent tutor adaptive learning design and user interface with VR. We propose the develop of constructing a model domain of a subset of partial discharges enables adaptive training through a trainee model which represents the affective and knowledge states of trainees. According to the success of the intelligent tutor system with VR, it is also hypothesized that the trainees will able to learn the electrical domain installations of partial discharges and gain knowledge more efficient and well trained than trainees using traditional methods of teaching without running any risk of being in danger, traditional methods makes training lengthily, costly and dangerously. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intelligent%20tutoring%20system" title="intelligent tutoring system">intelligent tutoring system</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title=" virtual reality"> virtual reality</a>, <a href="https://publications.waset.org/abstracts/search?q=partials%20discharges" title=" partials discharges"> partials discharges</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20learning" title=" adaptive learning"> adaptive learning</a> </p> <a href="https://publications.waset.org/abstracts/68020/intelligent-tutor-using-adaptive-learning-to-partial-discharges-with-virtual-reality-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68020.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">315</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2887</span> Intervention of Self-Limiting L1 Inner Speech during L2 Presentations: A Study of Bangla-English Bilinguals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdul%20Wahid">Abdul Wahid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Inner speech, also known as verbal thinking, self-talk or private speech, is characterized by the subjective language experience in the absence of overt or audible speech. It is a psychological form of verbal activity which is being rehearsed without the articulation of any sound wave. In Psychology, self-limiting speech means the type of speech which contains information that inhibits the development of the self. People, in most cases, experience inner speech in their first language. It is very frequent in Bangladesh where the Bangla (L1) speaking students lose track of speech during their presentations in English (L2). This paper investigates into the long pauses (more than 0.4 seconds long) in English (L2) presentations by Bangla speaking students (18-21 year old) and finds the intervention of Bangla (L1) inner speech as one of its causes. The overt speeches of the presenters are placed on Audacity Audio Editing software where the length of pauses are measured in milliseconds. Varieties of inner speech questionnaire (VISQ) have been conducted randomly amongst the participants out of whom 20 were selected who have similar phenomenology of inner speech. They have been interviewed to describe the type and content of the voices that went on in their head during the long pauses. The qualitative interview data are then codified and converted into quantitative data. It was observed that in more than 80% cases students experience self-limiting inner speech/self-talk during their unwanted pauses in L2 presentations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bangla-English%20Bilinguals" title="Bangla-English Bilinguals">Bangla-English Bilinguals</a>, <a href="https://publications.waset.org/abstracts/search?q=inner%20speech" title=" inner speech"> inner speech</a>, <a href="https://publications.waset.org/abstracts/search?q=L1%20intervention%20in%20bilingualism" title=" L1 intervention in bilingualism"> L1 intervention in bilingualism</a>, <a href="https://publications.waset.org/abstracts/search?q=motor%20schema" title=" motor schema"> motor schema</a>, <a href="https://publications.waset.org/abstracts/search?q=pauses" title=" pauses"> pauses</a>, <a href="https://publications.waset.org/abstracts/search?q=phonological%20loop" title=" phonological loop"> phonological loop</a>, <a href="https://publications.waset.org/abstracts/search?q=phonological%20store" title=" phonological store"> phonological store</a>, <a href="https://publications.waset.org/abstracts/search?q=working%20memory" title=" working memory"> working memory</a> </p> <a href="https://publications.waset.org/abstracts/128980/intervention-of-self-limiting-l1-inner-speech-during-l2-presentations-a-study-of-bangla-english-bilinguals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128980.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">152</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2886</span> Performance Evaluation of Acoustic-Spectrographic Voice Identification Method in Native and Non-Native Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=E.%20Krasnova">E. Krasnova</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Bulgakova"> E. Bulgakova</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Shchemelinin"> V. Shchemelinin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with acoustic-spectrographic voice identification method in terms of its performance in non-native language speech. Performance evaluation is conducted by comparing the result of the analysis of recordings containing native language speech with recordings that contain foreign language speech. Our research is based on Tajik and Russian speech of Tajik native speakers due to the character of the criminal situation with drug trafficking. We propose a pilot experiment that represents a primary attempt enter the field. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title="speaker identification">speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic-spectrographic%20method" title=" acoustic-spectrographic method"> acoustic-spectrographic method</a>, <a href="https://publications.waset.org/abstracts/search?q=non-native%20speech" title=" non-native speech"> non-native speech</a>, <a href="https://publications.waset.org/abstracts/search?q=performance%20evaluation" title=" performance evaluation"> performance evaluation</a> </p> <a href="https://publications.waset.org/abstracts/12496/performance-evaluation-of-acoustic-spectrographic-voice-identification-method-in-native-and-non-native-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12496.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2885</span> Automatic Segmentation of the Clean Speech Signal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Ben%20Messaoud">M. A. Ben Messaoud</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Bouzid"> A. Bouzid</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Ellouze"> N. Ellouze</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech Segmentation is the measure of the change point detection for partitioning an input speech signal into regions each of which accords to only one speaker. In this paper, we apply two features based on multi-scale product (MP) of the clean speech, namely the spectral centroid of MP, and the zero crossings rate of MP. We focus on multi-scale product analysis as an important tool for segmentation extraction. The multi-scale product is based on making the product of the speech wavelet transform coefficients at three successive dyadic scales. We have evaluated our method on the Keele database. Experimental results show the effectiveness of our method presenting a good performance. It shows that the two simple features can find word boundaries, and extracted the segments of the clean speech. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multiscale%20product" title="multiscale product">multiscale product</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20centroid" title=" spectral centroid"> spectral centroid</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20segmentation" title=" speech segmentation"> speech segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=zero%20crossings%20rate" title=" zero crossings rate"> zero crossings rate</a> </p> <a href="https://publications.waset.org/abstracts/17566/automatic-segmentation-of-the-clean-speech-signal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17566.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">499</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2884</span> The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fawaz%20S.%20Al-Anzi">Fawaz S. Al-Anzi</a>, <a href="https://publications.waset.org/abstracts/search?q=Dia%20AbuZeina"> Dia AbuZeina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech recognition is of an important contribution in promoting new technologies in human computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires different stages before obtaining the desired output. Among automatic speech recognition (ASR) components is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. Feature extraction process aims at approximating the linguistic content that is conveyed by the input speech signal. In speech processing field, there are several methods to extract speech features, however, Mel Frequency Cepstral Coefficients (MFCC) is the popular technique. It has been long observed that the MFCC is dominantly used in the well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice to identify the different speech segments in order to obtain the language phonemes for further training and decoding steps. Due to MFCC good performance, the previous studies show that the MFCC dominates the Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps that are performed to get these coefficients using the HTK toolkit. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title="speech recognition">speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic%20features" title=" acoustic features"> acoustic features</a>, <a href="https://publications.waset.org/abstracts/search?q=mel%20frequency" title=" mel frequency"> mel frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=cepstral%20coefficients" title=" cepstral coefficients"> cepstral coefficients</a> </p> <a href="https://publications.waset.org/abstracts/78382/the-capacity-of-mel-frequency-cepstral-coefficients-for-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2883</span> Eisenhower’s Farewell Speech: Initial and Continuing Communication Effects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Kuiper">B. Kuiper</a> </p> <p class="card-text"><strong>Abstract:</strong></p> When Dwight D. Eisenhower delivered his final Presidential speech in 1961, he was using the opportunity to bid farewell to America, but he was also trying to warn his fellow countrymen about deeper challenges threatening the country. In this analysis, Eisenhower’s speech is examined in light of the impact it had on American culture, communication concepts, and political ramifications. The paper initially highlights the previous literature on the speech, especially in light of its 50<sup>th </sup>anniversary, and reveals a man whose main concern was how the speech’s words would affect his beloved country. The painstaking approach to the wording of the speech to reveal the intent is key, particularly in light of analyzing the motivations according to “virtuous communication.” This philosophical construct indicates that Eisenhower’s Farewell Address was crafted carefully according to a departing President’s deepest values and concerns, concepts that he wanted to pass along to his successor, to his country, and even to the world. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eisenhower" title="Eisenhower">Eisenhower</a>, <a href="https://publications.waset.org/abstracts/search?q=mass%20communication" title=" mass communication"> mass communication</a>, <a href="https://publications.waset.org/abstracts/search?q=political%20speech" title=" political speech"> political speech</a>, <a href="https://publications.waset.org/abstracts/search?q=rhetoric" title=" rhetoric"> rhetoric</a> </p> <a href="https://publications.waset.org/abstracts/50004/eisenhowers-farewell-speech-initial-and-continuing-communication-effects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50004.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">274</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2882</span> A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qianhua%20He">Qianhua He</a>, <a href="https://publications.waset.org/abstracts/search?q=Weili%20Zhou"> Weili Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Aiwu%20Chen"> Aiwu Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A sparse representation speech denoising method based on adapted stopping residue error was presented in this paper. Firstly, the cross-correlation between the clean speech spectrum and the noise spectrum was analyzed, and an estimation method was proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum was learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error was adaptively achieved according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach was applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech was re-synthesised via the inverse Fourier transform with the reconstructed speech spectrum and the noisy speech phase. The experiment results show that the proposed method outperforms the conventional methods in terms of subjective and objective measure. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20denoising" title="speech denoising">speech denoising</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a>, <a href="https://publications.waset.org/abstracts/search?q=k-singular%20value%20decomposition" title=" k-singular value decomposition"> k-singular value decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=orthogonal%20matching%20pursuit" title=" orthogonal matching pursuit"> orthogonal matching pursuit</a> </p> <a href="https://publications.waset.org/abstracts/66670/a-sparse-representation-speech-denoising-method-based-on-adapted-stopping-residue-error" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66670.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">499</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2881</span> Speech Acts and Politeness Strategies in an EFL Classroom in Georgia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tinatin%20Kurdghelashvili">Tinatin Kurdghelashvili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with the usage of speech acts and politeness strategies in an EFL classroom in Georgia (Rep of). It explores the students’ and the teachers’ practice of the politeness strategies and the speech acts of apology, thanking, request, compliment/encouragement, command, agreeing/disagreeing, addressing and code switching. The research method includes observation as well as a questionnaire. The target group involves the students from Georgian public schools and two certified, experienced local English teachers. The analysis is based on Searle’s Speech Act Theory and Brown and Levinson’s politeness strategies. The findings show that the students have certain knowledge regarding politeness yet they fail to apply them in English communication. In addition, most of the speech acts from the classroom interaction are used by the teachers and not the students. Thereby, it is suggested that teachers should cultivate the students’ communicative competence and attempt to give them opportunities to practice more English speech acts than they do today. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=english%20as%20a%20foreign%20language" title="english as a foreign language">english as a foreign language</a>, <a href="https://publications.waset.org/abstracts/search?q=Georgia" title=" Georgia"> Georgia</a>, <a href="https://publications.waset.org/abstracts/search?q=politeness%20principles" title=" politeness principles"> politeness principles</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20acts" title=" speech acts"> speech acts</a> </p> <a href="https://publications.waset.org/abstracts/17320/speech-acts-and-politeness-strategies-in-an-efl-classroom-in-georgia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17320.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">636</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=96">96</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=97">97</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=intelligent%20speech%20interface&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>