Search results for: voice features.
name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="voice features."> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1662</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: voice features.</h1> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1662</span> Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Vesna%20Kirandziska">Vesna Kirandziska</a>, <a href="https://publications.waset.org/search?q=Nevena%20Ackovska"> Nevena Ackovska</a>, <a href="https://publications.waset.org/search?q=Ana%20Madevska%20Bogdanova"> Ana Madevska Bogdanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The problem of emotion recognition is a challenging problem. It is still an open problem from the aspect of both intelligent systems and psychology. In this paper, both voice features and facial features are used for building an emotion recognition system. A Support Vector Machine classifiers are built by using raw data from video recordings. 
The results obtained for emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison is made between classifiers built from facial data only, from voice data only, and from the combination of both. The need for a better combination of the information from facial expressions and voice data is argued.
Keywords: Emotion recognition, facial recognition, signal processing, machine learning.
Downloads: 2018
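The comparison this abstract describes (voice-only vs. face-only vs. combined classifiers) can be sketched with scikit-learn. This is a minimal illustration with random stand-in feature matrices; the shapes, label count, and kernel choice are assumptions, not the authors' setup:

```python
# Minimal sketch of the voice-only / face-only / fused SVM comparison.
# Feature matrices are random stand-ins, not the authors' data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
voice_feats = rng.normal(size=(n, 24))    # e.g. pitch/energy statistics
facial_feats = rng.normal(size=(n, 40))   # e.g. landmark-based measures
labels = rng.integers(0, 6, size=n)       # six emotion classes (assumed)

for name, X in [("voice only", voice_feats),
                ("face only", facial_feats),
                ("fusion", np.hstack([voice_feats, facial_feats]))]:
    acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```

Concatenating the two feature matrices before training (early fusion) is only one way to combine the modalities; the abstract's closing sentence argues that a better combination is still needed.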
1661. Voice Features as the Diagnostic Marker of Autism
Authors: Elena Lyakso, Olga Frolova, Yuri Matveev
Abstract: The aim of the study is to determine the acoustic features of the voice and speech of children with autism spectrum disorders (ASD) as a possible additional diagnostic criterion. The participants were 95 children with ASD aged 5-16 years, 150 typically developing (TD) children, and 103 adults who listened to the children's speech samples. Three types of speech analysis were performed: spectrographic analysis, perceptual evaluation by listeners, and automatic recognition. In the speech of children with ASD, the pitch values, the pitch range, and the frequency and intensity of the third ("emotional") formant, which lead to an "atypical" vowel spectrogram, are higher than the corresponding parameters in the speech of TD children. High values of the vowel articulation index (VAI) are specific to the speech signals of children with ASD. These acoustic features can be considered a diagnostic marker of autism. The ability of both humans and automatic systems to recognize the psychoneurological state of children from their speech is determined.
Keywords: Autism spectrum disorders, biomarker of autism, child speech, voice features.
Downloads: 619
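The vowel articulation index mentioned above is commonly computed from the first two formants of the corner vowels /a/, /i/, /u/. A small sketch using the widely cited formula, which may differ from the study's exact variant; the formant values below are illustrative, not measured data:

```python
def vowel_articulation_index(f1_a, f2_a, f1_i, f2_i, f1_u, f2_u):
    """Common VAI formula over corner-vowel formants (Hz).

    Higher values indicate more widely separated (better articulated)
    vowels. This is the widely cited variant; the study's exact
    definition may differ.
    """
    return (f2_i + f1_a) / (f1_i + f1_u + f2_u + f2_a)

# Illustrative formant values in Hz (not taken from the study):
print(vowel_articulation_index(f1_a=850, f2_a=1200,
                               f1_i=300, f2_i=2300,
                               f1_u=350, f2_u=800))
```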
class="card-text"><strong>Abstract:</strong></p> This paper introduces an automatic voice classification system for the diagnosis of individual constitution based on Sasang Constitutional Medicine (SCM) in Traditional Korean Medicine (TKM). For the developing of this algorithm, we used the voices of 309 female speakers and extracted a total of 134 speech features from the voice data consisting of 5 sustained vowels and one sentence. The classification system, based on a rule-based algorithm that is derived from a non parametric statistical method, presents 3 types of decisions: reserved, positive and negative decisions. In conclusion, 71.5% of the voice data were diagnosed by this system, of which 47.7% were correct positive decisions and 69.7% were correct negative decisions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Voice%20Classifier" title="Voice Classifier">Voice Classifier</a>, <a href="https://publications.waset.org/search?q=Sasang%20Constitution%20Medicine" title=" Sasang Constitution Medicine"> Sasang Constitution Medicine</a>, <a href="https://publications.waset.org/search?q=Traditional%20Korean%20Medicine" title=" Traditional Korean Medicine"> Traditional Korean Medicine</a>, <a href="https://publications.waset.org/search?q=SCM" title=" SCM"> SCM</a>, <a href="https://publications.waset.org/search?q=TKM." title=" TKM."> TKM.</a> </p> <a href="https://publications.waset.org/5204/automatic-voice-classification-system-based-on-traditional-korean-medicine" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5204/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5204/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5204/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5204/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5204/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5204/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5204/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5204/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5204/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5204/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5204.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1389</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1659</span> The Effect of the Hemispheres of the Brain and the Tone of Voice on Persuasion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Rica%20Jell%20de%20Laza">Rica Jell de Laza</a>, <a href="https://publications.waset.org/search?q=Jose%20Alberto%20Fernandez"> Jose Alberto 
1659. The Effect of the Hemispheres of the Brain and the Tone of Voice on Persuasion
Authors: Rica Jell de Laza, Jose Alberto Fernandez, Andrea Marie Mendoza, Qristin Jeuel Regalado
Abstract: This study investigates whether participants experience different levels of persuasion depending on the hemisphere of the brain and the tone of voice. The experiment was performed on 96 volunteer undergraduate students taking an introductory course in psychology. The participants took part in a 2 x 3 (Hemisphere: left, right x Tone of Voice: positive, neutral, negative) mixed factorial design measuring how much a person was persuaded. Results showed that neither the hemisphere of the brain nor the tone of voice significantly affected persuasion individually, and there was no interaction effect. Therefore, the hemispheres of the brain and the tone of voice employed play insignificant roles in persuading a person.
Keywords: Dichotic listening, brain hemisphere, tone of voice, persuasion.
Downloads: 1412
href="https://publications.waset.org/search?q=Dong-Yan%20Huang">Dong-Yan Huang</a>, <a href="https://publications.waset.org/search?q=Ee%20Ping%20Ong"> Ee Ping Ong</a>, <a href="https://publications.waset.org/search?q=Susanto%20Rahardja"> Susanto Rahardja</a>, <a href="https://publications.waset.org/search?q=Minghui%20Dong"> Minghui Dong</a>, <a href="https://publications.waset.org/search?q=Haizhou%20Li"> Haizhou Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The transformation of vocal characteristics aims at modifying voice such that the intelligibility of aphonic voice is increased or the voice characteristics of a speaker (source speaker) to be perceived as if another speaker (target speaker) had uttered it. In this paper, the current state-of-the-art voice characteristics transformation methodology is reviewed. Special emphasis is placed on voice transformation methodology and issues for improving the transformed speech quality in intelligibility and naturalness are discussed. In particular, it is suggested to use the modulation theory of speech as a base for research on high quality voice transformation. This approach allows one to separate linguistic, expressive, organic and perspective information of speech, based on an analysis of how they are fused when speech is produced. Therefore, this theory provides the fundamentals not only for manipulating non-linguistic, extra-/paralinguistic and intra-linguistic variables for voice transformation, but also for paving the way for easily transposing the existing voice transformation methods to emotion-related voice quality transformation and speaking style transformation. From the perspectives of human speech production and perception, the popular voice transformation techniques are described and classified them based on the underlying principles either from the speech production or perception mechanisms or from both. In addition, the advantages and limitations of voice transformation techniques and the experimental manipulation of vocal cues are discussed through examples from past and present research. Finally, a conclusion and road map are pointed out for more natural voice transformation algorithms in the future. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Voice%20transformation" title="Voice transformation">Voice transformation</a>, <a href="https://publications.waset.org/search?q=Voice%20Quality" title=" Voice Quality"> Voice Quality</a>, <a href="https://publications.waset.org/search?q=Emotion" title=" Emotion"> Emotion</a>, <a href="https://publications.waset.org/search?q=Individuality" title="Individuality">Individuality</a>, <a href="https://publications.waset.org/search?q=Speaking%20Style" title=" Speaking Style"> Speaking Style</a>, <a href="https://publications.waset.org/search?q=Speech%20Production" title=" Speech Production"> Speech Production</a>, <a href="https://publications.waset.org/search?q=Speech%20Perception." 
title=" Speech Perception."> Speech Perception.</a> </p> <a href="https://publications.waset.org/4782/transformation-of-vocal-characteristics-a-review-of-literature" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4782/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4782/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4782/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4782/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4782/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4782/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4782/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4782/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4782/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4782/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4782.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2043</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1657</span> The Functions of the Student Voice and Student-Centered Teaching Practices in Classroom-Based Music Education</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sofia%20Douklia">Sofia Douklia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The present context paper aims to present the important role of ‘student voice’ and the music teacher in the classroom, which contributes to more student-centered music education. The aim is to focus on the functions of the student voice through the music spectrum, which has been born in the music classroom, and the teacher’s methodologies and techniques used in the music classroom. The music curriculum, the principles of student-centered music education, and the role of students and teachers as music ambassadors have been considered the major music parameters of student voice. The student- voice is a worth-mentioning aspect of a student-centered education, and all teachers should consider and promote its existence in their classroom.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Student%E2%80%99s%20voice" title="Student’s voice">Student’s voice</a>, <a href="https://publications.waset.org/search?q=student-centered%20education" title=" student-centered education"> student-centered education</a>, <a href="https://publications.waset.org/search?q=music%20ambassadors" title=" music ambassadors"> music ambassadors</a>, <a href="https://publications.waset.org/search?q=music%20teachers." 
title=" music teachers."> music teachers.</a> </p> <a href="https://publications.waset.org/10013235/the-functions-of-the-student-voice-and-student-centered-teaching-practices-in-classroom-based-music-education" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10013235/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10013235/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10013235/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10013235/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10013235/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10013235/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10013235/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10013235/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10013235/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10013235/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10013235.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">210</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1656</span> Efficient DTW-Based Speech Recognition System for Isolated Words of Arabic Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Khalid%20A.%20Darabkh">Khalid A. Darabkh</a>, <a href="https://publications.waset.org/search?q=Ala%20F.%20Khalifeh"> Ala F. Khalifeh</a>, <a href="https://publications.waset.org/search?q=Baraa%20A.%20Bathech"> Baraa A. Bathech</a>, <a href="https://publications.waset.org/search?q=Saed%20W.%20Sabah"> Saed W. Sabah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Despite the fact that Arabic language is currently one of the most common languages worldwide, there has been only a little research on Arabic speech recognition relative to other languages such as English and Japanese. Generally, digital speech processing and voice recognition algorithms are of special importance for designing efficient, accurate, as well as fast automatic speech recognition systems. However, the speech recognition process carried out in this paper is divided into three stages as follows: firstly, the signal is preprocessed to reduce noise effects. After that, the signal is digitized and hearingized. Consequently, the voice activity regions are segmented using voice activity detection (VAD) algorithm. Secondly, features are extracted from the speech signal using Mel-frequency cepstral coefficients (MFCC) algorithm. Moreover, delta and acceleration (delta-delta) coefficients have been added for the reason of improving the recognition accuracy. 
Finally, each test word's features are compared to the training database using the dynamic time warping (DTW) algorithm. With the best setup of all relevant parameters of the aforementioned techniques, the proposed system achieved a recognition rate of about 98.5%, outperforming HMM- and ANN-based approaches available in the literature.
Keywords: Arabic speech recognition, MFCC, DTW, VAD.
Downloads: 4075
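A minimal sketch of the MFCC-plus-derivatives feature extraction and DTW template matching described above. It uses librosa, which is an assumption (the paper does not name its toolchain), and the file names in the usage comment are hypothetical:

```python
# Sketch of the isolated-word pipeline: MFCC + delta + delta-delta
# features compared against a stored template by dynamic time warping.
import numpy as np
import librosa

def word_features(y, sr):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    delta = librosa.feature.delta(mfcc)            # first derivative
    delta2 = librosa.feature.delta(mfcc, order=2)  # acceleration
    return np.vstack([mfcc, delta, delta2])        # shape (39, n_frames)

def dtw_distance(feats_a, feats_b):
    cost, _ = librosa.sequence.dtw(X=feats_a, Y=feats_b)
    return cost[-1, -1]  # accumulated cost of the optimal alignment

# Hypothetical usage: classify a test word by its nearest template.
# y_test, sr = librosa.load("test_word.wav", sr=16000)
# y_ref, _ = librosa.load("template_word.wav", sr=16000)
# print(dtw_distance(word_features(y_test, sr), word_features(y_ref, sr)))
```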
1655. Speaker Recognition Using LIRA Neural Networks
Authors: Nestor A. Garcia Fragoso, Tetyana Baydyk, Ernst Kussul
Abstract: This article presents our investigation in the field of voice recognition. For this purpose, we created a voice database containing different phrases in two languages, English and Spanish, for men and women. As the classifier, the LIRA (Limited Receptive Area) grayscale neural classifier was selected; it was developed for image recognition tasks and has demonstrated good results. We therefore developed a recognition system using this classifier for voice recognition: from a specific set of speakers, the system can recognize a speaker's voice. It uses spectrograms of the voice signals as input, extracts their characteristics, and identifies the speaker. The results are described and analyzed in this article. The classifier can be used for speaker identification in security systems or in smart buildings for different types of intelligent devices.
Keywords: Extreme learning, LIRA neural classifier, speaker identification, voice recognition.
Downloads: 764
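LIRA itself is not publicly available, so the sketch below keeps only the shape of the pipeline (spectrogram in, speaker label out) and substitutes a nearest-neighbour classifier; the synthetic "speakers" are noisy tones, purely for illustration:

```python
# Stand-in for the spectrogram -> classifier speaker-ID pipeline; a
# k-nearest-neighbour classifier replaces the (non-public) LIRA model.
import numpy as np
from scipy.signal import spectrogram
from sklearn.neighbors import KNeighborsClassifier

def spectrogram_vector(signal, fs=16000):
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=256)
    return np.log1p(sxx).mean(axis=1)  # mean log power per frequency bin

rng = np.random.default_rng(1)
t = np.arange(16000) / 16000.0
X, y = [], []
for label, f0 in enumerate([120, 220]):           # two synthetic "speakers"
    for _ in range(10):
        sig = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.normal(size=t.size)
        X.append(spectrogram_vector(sig))
        y.append(label)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
probe = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.normal(size=t.size)
print("predicted speaker:", clf.predict([spectrogram_vector(probe)])[0])
```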
1654. Recognition by Online Modeling – a New Approach of Recognizing Voice Signals in Linear Time
Authors: Jyh-Da Wei, Hsin-Chen Tsai
Abstract: This work presents a novel means of extracting fixed-length parameters from voice signals, such that words can be recognized in linear time. The power and the zero-crossing rate are first calculated segment by segment from a voice signal, generating two feature sequences. We then construct an FIR system across these two sequences. The parameters of this FIR system, used as the input to a multilayer perceptron recognizer, can be derived by recursive LSE (least-squares estimation), implying that the complexity of the overall process is linear in the signal size. In the second part of this work, we introduce a weighting factor λ to emphasize recent input, which allows us to further recognize continuous speech signals. Experiments employ voice signals of the numbers zero to nine spoken in Mandarin Chinese. The proposed method is verified to recognize voice signals efficiently and accurately.
Keywords: Speech Recognition, FIR system, Recursive LSE, Multilayer Perceptron.
Downloads: 1417
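The fixed-length parameter idea can be sketched as follows: compute the two feature sequences, then fit an FIR model mapping one to the other. Batch least squares stands in for the paper's recursive LSE, and the forgetting factor λ is omitted; frame size and filter order are illustrative:

```python
# Sketch: segment-wise power and zero-crossing-rate sequences, then FIR
# parameters fitted by (batch) least squares. The paper uses recursive
# LSE with a forgetting factor, which this sketch omits.
import numpy as np

def power_and_zcr(signal, frame=256):
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    power = (frames ** 2).mean(axis=1)
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    return power, zcr

def fir_parameters(x, y, order=8):
    # Solve y[n] ~ sum_k h[k] * x[n-k] for the fixed-length vector h.
    rows = np.array([x[n - order:n][::-1] for n in range(order, len(x))])
    h, *_ = np.linalg.lstsq(rows, y[order:], rcond=None)
    return h  # fixed-length vector that would feed the MLP recognizer

rng = np.random.default_rng(2)
sig = rng.normal(size=8192)                  # stand-in for a voice signal
p, z = power_and_zcr(sig)
print(fir_parameters(p, z).round(3))
```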
1653. Minimum Data of a Speech Signal as Special Indicators of Identification in Phonoscopy
Authors: Nazaket Gazieva
Abstract: Voice biometric data, associated with physiological, psychological, and other factors, are widely used in forensic phonoscopy. There are various methods for identifying and verifying a person by voice. This article explores minimal speech signal data as individual parameters of a speech signal. Monozygotic twins are believed to be genetically identical; using the minimal data of the speech signal, we came to the conclusion that the voice imprint of monozygotic twins is nevertheless individual. From the experiment we conclude that these minimal indicators of the speech signal are more stable and reliable for phonoscopic examinations.
Keywords: Biometric voice prints, fundamental frequency, phonogram, speech signal, temporal characteristics.
Downloads: 577
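Fundamental frequency, listed in the keywords, is one such minimal parameter. Below is a textbook autocorrelation pitch estimator on a synthetic frame; the study's actual measurement procedure is not specified in the abstract:

```python
# Textbook autocorrelation F0 estimator, shown only to illustrate
# "fundamental frequency" as a minimal speech-signal parameter.
import numpy as np

def estimate_f0(frame, fs, fmin=75.0, fmax=400.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # plausible pitch-lag range
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(int(0.04 * fs)) / fs            # one 40 ms analysis frame
frame = np.sin(2 * np.pi * 150 * t)           # synthetic 150 Hz "voice"
print(round(estimate_f0(frame, fs), 1))       # close to 150.0
```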
href="https://publications.waset.org/search?q=Halabi%20B%20Hasbullah"> Halabi B Hasbullah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Voice over Internet Protocol (VoIP) is a form of voice communication that uses audio data to transmit voice signals to the end user. VoIP is one of the most important technologies in the World of communication. Around, 20 years of research on VoIP, some problems of VoIP are still remaining. During the past decade and with growing of wireless technologies, we have seen that many papers turn their concentration from Wired-LAN to Wireless-LAN. VoIP over Wireless LAN (WLAN) faces many challenges due to the loose nature of wireless network. Issues like providing Quality of Service (QoS) at a good level, dedicating capacity for calls and having secure calls is more difficult rather than wired LAN. Therefore VoIP over WLAN (VoWLAN) remains a challenging research topic. In this paper we consolidate and address major VoWLAN issues. This research is helpful for those researchers wants to do research in Voice over IP technology over WLAN network. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Capacity" title="Capacity">Capacity</a>, <a href="https://publications.waset.org/search?q=QoS" title=" QoS"> QoS</a>, <a href="https://publications.waset.org/search?q=Security" title=" Security"> Security</a>, <a href="https://publications.waset.org/search?q=VoIP%20Issues" title=" VoIP Issues"> VoIP Issues</a>, <a href="https://publications.waset.org/search?q=WLAN." title=" WLAN."> WLAN.</a> </p> <a href="https://publications.waset.org/12438/a-survey-on-voice-over-ip-over-wireless-lans" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12438/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12438/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12438/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12438/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12438/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12438/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12438/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12438/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12438/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12438/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2245</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1651</span> VoIP Source Model based on the Hyperexponential Distribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/search?q=Arkadiusz%20Biernacki">Arkadiusz Biernacki</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present a statistical analysis of Voice over IP (VoIP) packet streams produced by the G.711 voice coder with voice activity detection (VAD). During telephone conversation, depending whether the interlocutor speaks (ON) or remains silent (OFF), packets are produced or not by a voice coder. As index of dispersion for both ON and OFF times distribution was greater than one, we used hyperexponential distribution for approximation of streams duration. For each stage of the hyperexponential distribution, we tested goodness of our fits using graphical methods, we calculated estimation errors, and performed Kolmogorov-Smirnov test. Obtained results showed that the precise VoIP source model can be based on the five-state Markov process. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=VoIP%20source%20modelling" title="VoIP source modelling">VoIP source modelling</a>, <a href="https://publications.waset.org/search?q=distribution%20approximation" title=" distribution approximation"> distribution approximation</a>, <a href="https://publications.waset.org/search?q=hyperexponential%20distribution." title=" hyperexponential distribution."> hyperexponential distribution.</a> </p> <a href="https://publications.waset.org/10390/voip-source-model-based-on-the-hyperexponential-distribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10390/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10390/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10390/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10390/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10390/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10390/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10390/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10390/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10390/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10390/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10390.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1710</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1650</span> Secure peerTalk Using PEERT System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nebu%20Tom%20John">Nebu Tom John</a>, <a href="https://publications.waset.org/search?q=N.%20Dhinakaran"> N. 
1650. Secure peerTalk Using PEERT System
Authors: Nebu Tom John, N. Dhinakaran
Abstract: Multiparty voice over IP (MVoIP) systems allow a group of people to communicate freely with each other via the Internet; they have many applications, such as online gaming, teleconferencing, and online stock trading. PeerTalk is a peer-to-peer MVoIP system that is more feasible than existing approaches such as P2P overlay multicast and coupled distributed processing. Since stream mixing and distribution are done by the peers, it is vulnerable to major security threats such as node misbehavior, eavesdropping, Sybil attacks, denial of service (DoS), call tampering, and man-in-the-middle attacks. To thwart these threats, a security framework called PEERTS (PEEred Reputed Trustworthy System for peerTalk) is implemented so that efficient and secure communication can be carried out between peers.
Keywords: Key management system, peer-to-peer voice streaming, reputed trust management system, voice-over-IP.
Downloads: 1882
href="https://publications.waset.org/search?q=Nurulisma%20Ismail">Nurulisma Ismail</a>, <a href="https://publications.waset.org/search?q=Halimah%20Badioze%20Zaman"> Halimah Badioze Zaman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, web-based technologies influence in people-s daily life such as in education, business and others. Therefore, many web developers are too eager to develop their web applications with fully animation graphics and forgetting its accessibility to its users. Their purpose is to make their web applications look impressive. Thus, this paper would highlight on the usability and accessibility of a voice recognition browser as a tool to facilitate the visually impaired and blind learners in accessing virtual learning environment. More specifically, the objectives of the study are (i) to explore the challenges faced by the visually impaired learners in accessing virtual learning environment (ii) to determine the suitable guidelines for developing a voice recognition browser that is accessible to the visually impaired. Furthermore, this study was prepared based on an observation conducted with the Malaysian visually impaired learners. Finally, the result of this study would underline on the development of an accessible voice recognition browser for the visually impaired. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Accessibility" title="Accessibility">Accessibility</a>, <a href="https://publications.waset.org/search?q=Usability" title=" Usability"> Usability</a>, <a href="https://publications.waset.org/search?q=Virtual%20Learning" title=" Virtual Learning"> Virtual Learning</a>, <a href="https://publications.waset.org/search?q=Visually%0AImpaired" title=" Visually Impaired"> Visually Impaired</a>, <a href="https://publications.waset.org/search?q=Voice%20Recognition." 
title=" Voice Recognition."> Voice Recognition.</a> </p> <a href="https://publications.waset.org/5268/search-engine-module-in-voice-recognition-browser-to-facilitate-the-visually-impaired-in-virtual-learning-mgsys-visi-vl" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5268/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5268/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5268/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5268/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5268/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5268/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5268/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5268/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5268/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5268/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5268.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2040</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1648</span> Computationally Efficient Signal Quality Improvement Method for VoIP System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=H.%20P.%20Singh">H. P. Singh</a>, <a href="https://publications.waset.org/search?q=S.%20Singh"> S. Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The voice signal in Voice over Internet protocol (VoIP) system is processed through the best effort policy based IP network, which leads to the network degradations including delay, packet loss jitter. The work in this paper presents the implementation of finite impulse response (FIR) filter for voice quality improvement in the VoIP system through distributed arithmetic (DA) algorithm. The VoIP simulations are conducted with AMR-NB 6.70 kbps and G.729a speech coders at different packet loss rates and the performance of the enhanced VoIP signal is evaluated using the perceptual evaluation of speech quality (PESQ) measurement for narrowband signal. 
The results show a reduction in the computational complexity of the system and a significant improvement in the quality of the VoIP voice signal.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=VoIP" title="VoIP">VoIP</a>, <a href="https://publications.waset.org/search?q=Signal%20Quality" title=" Signal Quality"> Signal Quality</a>, <a href="https://publications.waset.org/search?q=Distributed%20Arithmetic" title=" Distributed Arithmetic"> Distributed Arithmetic</a>, <a href="https://publications.waset.org/search?q=Packet%20Loss" title=" Packet Loss"> Packet Loss</a>, <a href="https://publications.waset.org/search?q=Speech%20Coder." title=" Speech Coder."> Speech Coder.</a> </p> <a href="https://publications.waset.org/6730/computationally-efficient-signal-quality-improvement-method-for-voip-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6730/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6730/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6730/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6730/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6730/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6730/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6730/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6730/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6730/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6730/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6730.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1830</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1647</span> Analysis of Vocal Fold Vibrations from High-Speed Digital Images Based On Dynamic Time Warping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=A.%20I.%20A.%20Rahman">A. I. A. Rahman</a>, <a href="https://publications.waset.org/search?q=Sh-Hussain%20Salleh"> Sh-Hussain Salleh</a>, <a href="https://publications.waset.org/search?q=K.%20Ahmad"> K. Ahmad</a>, <a href="https://publications.waset.org/search?q=K.%20Anuar"> K. Anuar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Analysis of vocal fold vibration is essential for understanding the mechanism of voice production and for improving the clinical assessment of voice disorders. This paper presents a Dynamic Time Warping (DTW)-based approach to analyze and objectively classify vocal fold vibration patterns.
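</p> <p>For readers unfamiliar with DTW, a minimal sketch of the alignment cost between two feature sequences (pure NumPy; the paper's LPC features and voice templates are assumed inputs, represented here by random arrays):</p> <pre><code class="language-python"># Hedged sketch: classic dynamic time warping distance between two
# feature sequences (rows = frames). This illustrates only the matching
# step; the LPC feature extraction of the paper is not reproduced here.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            # best of insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# classify a sample against named reference templates by lowest DTW cost
templates = {"clear": np.random.randn(40, 12), "breathy": np.random.randn(45, 12)}
sample = np.random.randn(42, 12)
label = min(templates, key=lambda k: dtw_distance(sample, templates[k]))
print("closest template:", label)
</code></pre> <p>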
The proposed technique was designed and implemented on a Glottal Area Waveform (GAW) extracted from high-speed laryngeal images by delineating the glottal edges in each image frame. Feature extraction from the GAW was performed using Linear Predictive Coding (LPC). Several types of voice reference templates, from simulations of clear, breathy, fry, pressed and hyperfunctional voice productions, were used. The patterns of the reference templates were first verified using the analytic signal generated through Hilbert transformation of the GAW. Samples from normal speakers’ voice recordings were then used to evaluate and test the effectiveness of this approach. Classification of the voice patterns using LPC and DTW gave an accuracy of 81%.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Dynamic%20Time%20Warping" title="Dynamic Time Warping">Dynamic Time Warping</a>, <a href="https://publications.waset.org/search?q=Glottal%20Area%20Waveform" title=" Glottal Area Waveform"> Glottal Area Waveform</a>, <a href="https://publications.waset.org/search?q=Linear%20Predictive%20Coding" title=" Linear Predictive Coding"> Linear Predictive Coding</a>, <a href="https://publications.waset.org/search?q=High-Speed%20Laryngeal%20Images" title=" High-Speed Laryngeal Images"> High-Speed Laryngeal Images</a>, <a href="https://publications.waset.org/search?q=Hilbert%20Transform." title=" Hilbert Transform."> Hilbert Transform.</a> </p> <a href="https://publications.waset.org/9998404/analysis-of-vocal-fold-vibrations-from-high-speed-digital-images-based-on-dynamic-time-warping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998404/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998404/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998404/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998404/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998404/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998404/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998404/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998404/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998404/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998404/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998404.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2334</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1646</span> Voice Command Recognition System Based on MFCC and VQ Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/search?q=Mahdi%20Shaneh">Mahdi Shaneh</a>, <a href="https://publications.waset.org/search?q=Azizollah%20Taheri"> Azizollah Taheri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of this project is to design a system to recognition voice commands. Most of voice recognition systems contain two main modules as follow “feature extraction" and “feature matching". In this project, MFCC algorithm is used to simulate feature extraction module. Using this algorithm, the cepstral coefficients are calculated on mel frequency scale. VQ (vector quantization) method will be used for reduction of amount of data to decrease computation time. In the feature matching stage Euclidean distance is applied as similarity criterion. Because of high accuracy of used algorithms, the accuracy of this voice command system is high. Using these algorithms, by at least 5 times repetition for each command, in a single training session, and then twice in each testing session zero error rate in recognition of commands is achieved. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=MFCC" title="MFCC">MFCC</a>, <a href="https://publications.waset.org/search?q=Vector%20quantization" title=" Vector quantization"> Vector quantization</a>, <a href="https://publications.waset.org/search?q=Vocal%20tract" title=" Vocal tract"> Vocal tract</a>, <a href="https://publications.waset.org/search?q=Voicecommand." title=" Voicecommand."> Voicecommand.</a> </p> <a href="https://publications.waset.org/4967/voice-command-recognition-system-based-on-mfcc-and-vq-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4967/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4967/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4967/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4967/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4967/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4967/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4967/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4967/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4967/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4967/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4967.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3157</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1645</span> Vocal Training and Practice Methods: A Glimpse on the South Indian Carnatic Music</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Raghavi%20Janaswamy">Raghavi 
<div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1645</span> Vocal Training and Practice Methods: A Glimpse on the South Indian Carnatic Music</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Raghavi%20Janaswamy">Raghavi Janaswamy</a>, <a href="https://publications.waset.org/search?q=Saraswathi%20K.%20Vasudev"> Saraswathi K. Vasudev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Music is one of the supreme arts of expression, next to speech itself. Its evolution over the centuries has paved the way for a variety of training protocols and performance methods. Indian classical music is one of the most elaborate and refined systems, with immense emphasis on voice culture: range, breath control, tone quality, flexibility and diction. Several exercises, namely saraliswaram, jantaswaram, dhatuswaram, upper stayi swaram, alamkaras and varnams, lay the foundation needed to gain this voice culture, a deeper understanding of voice development and, further on, the intricacies of the raga system. This article describes a few of the Carnatic music training methods, with an emphasis on advanced practice methods for articulating vocal skill, continuity in the voice, the ability to produce gamakams, and command of multiple rendering speeds at reasonable volume. The creativity of these exercises and their impact on voice production are discussed. The outlined conscious practice methods and vocal exercises bring out the optimum use of the natural human vocal system, not only enhancing singing quality but also conferring health benefits. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Carnatic%20music" title="Carnatic music">Carnatic music</a>, <a href="https://publications.waset.org/search?q=Saraliswaram" title=" Saraliswaram"> Saraliswaram</a>, <a href="https://publications.waset.org/search?q=Varnam" title=" Varnam"> Varnam</a>, <a href="https://publications.waset.org/search?q=Vocal%20training." title=" Vocal training."> Vocal training.
</a> </p> <a href="https://publications.waset.org/10011692/vocal-training-and-practice-methods-a-glimpse-on-the-south-indian-carnatic-music" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011692/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011692/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011692/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011692/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011692/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011692/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011692/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011692/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011692/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011692/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011692.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">785</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1644</span> Independent Encryption Technique for Mobile Voice Calls</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nael%20Hirzalla">Nael Hirzalla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The legality of some countries or agencies’ acts to spy on personal phone calls of the public became a hot topic to many social groups’ talks. It is believed that this act is considered an invasion to someone’s privacy. Such act may be justified if it is singling out specific cases but to spy without limits is very unacceptable. This paper discusses the needs for not only a simple and light weight technique to secure mobile voice calls but also a technique that is independent from any encryption standard or library. It then presents and tests one encrypting algorithm that is based of Frequency scrambling technique to show fair and delay-free process that can be used to protect phone calls from such spying acts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Frequency%20Scrambling" title="Frequency Scrambling">Frequency Scrambling</a>, <a href="https://publications.waset.org/search?q=Mobile%20Applications" title=" Mobile Applications"> Mobile Applications</a>, <a href="https://publications.waset.org/search?q=Real-%0D%0ATime%20Voice%20Encryption" title=" Real- Time Voice Encryption"> Real- Time Voice Encryption</a>, <a href="https://publications.waset.org/search?q=Spying%20on%20Calls." 
title=" Spying on Calls."> Spying on Calls.</a> </p> <a href="https://publications.waset.org/10001839/independent-encryption-technique-for-mobile-voice-calls" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10001839/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10001839/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10001839/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10001839/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10001839/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10001839/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10001839/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10001839/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10001839/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10001839/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10001839.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2557</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1643</span> Online Collaborative Learning System Using Speech Technology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sid-Ahmed.%20Selouani">Sid-Ahmed. Selouani</a>, <a href="https://publications.waset.org/search?q=Tang-Ho%20L%C3%AA"> Tang-Ho Lê</a>, <a href="https://publications.waset.org/search?q=Chadia%20Moghrabi"> Chadia Moghrabi</a>, <a href="https://publications.waset.org/search?q=Benoit%20Lanteigne"> Benoit Lanteigne</a>, <a href="https://publications.waset.org/search?q=Jean%20Roy"> Jean Roy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A Web-based learning tool, the Learn IN Context (LINC) system, designed and being used in some institution-s courses in mixed-mode learning, is presented in this paper. This mode combines face-to-face and distance approaches to education. LINC can achieve both collaborative and competitive learning. In order to provide both learners and tutors with a more natural way to interact with e-learning applications, a conversational interface has been included in LINC. Hence, the components and essential features of LINC+, the voice enhanced version of LINC, are described. We report evaluation experiments of LINC/LINC+ in a real use context of a computer programming course taught at the Université de Moncton (Canada). The findings show that when the learning material is delivered in the form of a collaborative and voice-enabled presentation, the majority of learners seem to be satisfied with this new media, and confirm that it does not negatively affect their cognitive load. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=E-leaning" title="E-leaning">E-leaning</a>, <a href="https://publications.waset.org/search?q=Knowledge%20Network" title=" Knowledge Network"> Knowledge Network</a>, <a href="https://publications.waset.org/search?q=Speech%20recognition" title=" Speech recognition"> Speech recognition</a>, <a href="https://publications.waset.org/search?q=Speech%20synthesis." title=" Speech synthesis."> Speech synthesis.</a> </p> <a href="https://publications.waset.org/3414/online-collaborative-learning-system-using-speech-technology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3414/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3414/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3414/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3414/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3414/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3414/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3414/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3414/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3414/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3414/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3414.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1713</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1642</span> A Security Model of Voice Eavesdropping Protection over Digital Networks </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Supachai%20Tangwongsan">Supachai Tangwongsan</a>, <a href="https://publications.waset.org/search?q=Sathaporn%20Kassuvan"> Sathaporn Kassuvan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The purpose of this research is to develop a security model for voice eavesdropping protection over digital networks. The proposed model provides an encryption scheme and a personal secret key exchange between communicating parties, a so-called voice data transformation system, resulting in a real-privacy conversation. The operation of this system comprises two main steps as follows: The first one is the personal secret key exchange for using the keys in the data encryption process during conversation. The key owner could freely make his/her choice in key selection, so it is recommended that one should exchange a different key for a different conversational party, and record the key for each case into the memory provided in the client device. 
The next step is to set and record another personal encryption option: encrypting either all frames or only some of them, the so-called figure of 1:M. Using different personal secret keys and different 1:M settings with different parties, without the intervention of the service operator, poses a considerable problem for any eavesdropper attempting to discover the key used during the conversation, especially within a short period of time. The scheme is thus a safe and effective protection against voice eavesdropping. The results of the implementation indicate that the system performs its function accurately as designed. The proposed system is therefore suitable for effective voice eavesdropping protection over digital networks, without any requirement to change existing network systems such as the mobile phone network or VoIP.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Computer%20Security" title="Computer Security">Computer Security</a>, <a href="https://publications.waset.org/search?q=Encryption" title=" Encryption"> Encryption</a>, <a href="https://publications.waset.org/search?q=Key%20Exchange" title=" Key Exchange"> Key Exchange</a>, <a href="https://publications.waset.org/search?q=Security%20Model" title="Security Model">Security Model</a>, <a href="https://publications.waset.org/search?q=Voice%20Eavesdropping." title=" Voice Eavesdropping."> Voice Eavesdropping.</a> </p> <a href="https://publications.waset.org/3240/a-security-model-of-voice-eavesdropping-protection-over-digital-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3240/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3240/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3240/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3240/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3240/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3240/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3240/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3240/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3240/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3240/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3240.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1581</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1641</span> High-Individuality Voice Conversion Based on Concatenative Speech Synthesis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Kei%20Fujii">Kei Fujii</a>, <a 
href="https://publications.waset.org/search?q=Jun%20Okawa"> Jun Okawa</a>, <a href="https://publications.waset.org/search?q=Kaori%20Suigetsu"> Kaori Suigetsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Concatenative speech synthesis is a method that can make speech sound which has naturalness and high-individuality of a speaker by introducing a large speech corpus. Based on this method, in this paper, we propose a voice conversion method whose conversion speech has high-individuality and naturalness. The authors also have two subjective evaluation experiments for evaluating individuality and sound quality of conversion speech. From the results, following three facts have be confirmed: (a) the proposal method can convert the individuality of speakers well, (b) employing the framework of unit selection (especially join cost) of concatenative speech synthesis into conventional voice conversion improves the sound quality of conversion speech, and (c) the proposal method is robust against the difference of genders between a source speaker and a target speaker. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=concatenative%20speech%20synthesis" title="concatenative speech synthesis">concatenative speech synthesis</a>, <a href="https://publications.waset.org/search?q=join%20cost" title=" join cost"> join cost</a>, <a href="https://publications.waset.org/search?q=speaker%20individuality" title=" speaker individuality"> speaker individuality</a>, <a href="https://publications.waset.org/search?q=unit%20selection" title=" unit selection"> unit selection</a>, <a href="https://publications.waset.org/search?q=voice%20conversion" title=" voice conversion"> voice conversion</a> </p> <a href="https://publications.waset.org/1272/high-individuality-voice-conversion-based-on-concatenative-speech-synthesis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/1272/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/1272/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/1272/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/1272/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/1272/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/1272/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/1272/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/1272/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/1272/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/1272/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/1272.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1939</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1640</span> 
Automotive 3-Microphone Noise Canceller in a Frequently Moving Noise Source Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Z.%20Qi">Z. Qi</a>, <a href="https://publications.waset.org/search?q=T.%20J.%20Moir"> T. J. Moir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>A combined three-microphone voice activity detector (VAD) and noise-canceling system is studied to enhance speech recognition in an automobile environment. A previous experiment clearly showed the ability of the composite system to cancel a single noise source outside a defined zone. This paper investigates the performance of the composite system when there are frequently moving noise sources (noise sources coming from different locations but not always present at the same time), e.g. other passenger speech or speech from a radio while the desired speech is present. To work in such an environment, the three-microphone voice activity detector (VAD) detects voice within a “VAD valid zone”, while the three-microphone noise canceller uses a “noise canceller valid zone” defined in free space around the user's head. A desired voice should therefore lie in the intersection of the noise canceller valid zone and the VAD valid zone, and all noise outside this intersection is suppressed. Experiments are shown for a real environment: all results were recorded in a car with omni-directional electret condenser microphones.</p>
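<p>The abstract does not give the cancelling algorithm itself; as a generic, hedged illustration of microphone-array processing of this kind, a delay-and-sum beamformer steered toward a chosen "valid zone" direction (geometry, sample rate, and sign conventions are illustrative assumptions, not the paper's method):</p> <pre><code class="language-python"># Hedged sketch: delay-and-sum beamforming toward a chosen direction,
# a generic stand-in for spatially selective 3-microphone processing.
# Geometry, sample rate, and steering angle are illustrative assumptions.
import numpy as np

fs = 16000
mic_x = np.array([-0.1, 0.0, 0.1])   # three mics on a 20 cm line (metres)
c = 343.0                            # speed of sound (m/s)

def delay_and_sum(signals: np.ndarray, angle_deg: float) -> np.ndarray:
    # signals: (3, n_samples); shift each channel in the frequency domain
    # by its geometric delay so the steered direction adds coherently
    delays = mic_x * np.sin(np.radians(angle_deg)) / c   # seconds per mic
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, d in zip(signals, delays):
        spectrum = np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * d)
        out += np.fft.irfft(spectrum, n=n)
    return out / len(signals)

mics = np.random.randn(3, 1024)              # stand-in 3-channel recording
enhanced = delay_and_sum(mics, angle_deg=0.0)  # steer broadside (user zone)
</code></pre>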
title=" microphone array beam forming."> microphone array beam forming.</a> </p> <a href="https://publications.waset.org/12708/automotive-3-microphone-noise-canceller-in-a-frequently-moving-noise-source-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12708/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12708/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12708/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12708/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12708/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12708/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12708/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12708/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12708/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12708/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12708.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1612</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1639</span> Gait Biometric for Person Re-Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Lavanya%20Srinivasan">Lavanya Srinivasan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Biometric identification is to identify unique features in a person like fingerprints, iris, ear, and voice recognition that need the subject's permission and physical contact. Gait biometric is used to identify the unique gait of the person by extracting moving features. The main advantage of gait biometric to identify the gait of a person at a distance, without any physical contact. In this work, the gait biometric is used for person re-identification. The person walking naturally compared with the same person walking with bag, coat and case recorded using long wave infrared, short wave infrared, medium wave infrared and visible cameras. The videos are recorded in rural and in urban environments. The pre-processing technique includes human identified using You Only Look Once, background subtraction, silhouettes extraction and synthesis Gait Entropy Image by averaging the silhouettes. The moving features are extracted from the Gait Entropy Energy Image. The extracted features are dimensionality reduced by the Principal Component Analysis and recognized using different classifiers. 
The comparative results with the different classifiers show that Linear Discriminant Analysis outperforms the other classifiers, with 95.8% for visible-band data in the rural dataset and 94.8% for long-wave infrared in the urban dataset.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=biometric" title="biometric">biometric</a>, <a href="https://publications.waset.org/search?q=gait" title=" gait"> gait</a>, <a href="https://publications.waset.org/search?q=silhouettes" title=" silhouettes"> silhouettes</a>, <a href="https://publications.waset.org/search?q=You%20Only%20Look%20Once" title=" You Only Look Once"> You Only Look Once</a> </p> <a href="https://publications.waset.org/10012344/gait-biometric-for-person-re-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10012344/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10012344/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10012344/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10012344/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10012344/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10012344/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10012344/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10012344/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10012344/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10012344/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10012344.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">531</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1638</span> Voice Driven Applications in Non-stationary and Chaotic Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=C.%20Kwan">C. Kwan</a>, <a href="https://publications.waset.org/search?q=X.%20Li"> X. Li</a>, <a href="https://publications.waset.org/search?q=D.%20Lao"> D. Lao</a>, <a href="https://publications.waset.org/search?q=Y.%20Deng"> Y. Deng</a>, <a href="https://publications.waset.org/search?q=Z.%20Ren"> Z. Ren</a>, <a href="https://publications.waset.org/search?q=B.%20Raj"> B. Raj</a>, <a href="https://publications.waset.org/search?q=R.%20Singh"> R. Singh</a>, <a href="https://publications.waset.org/search?q=R.%20Stern"> R. Stern</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Automated operations based on voice commands will become more and more important in many applications, including robotics, maintenance operations, etc. However, voice command recognition rates drop considerably in non-stationary and chaotic noise environments.
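</p> <p>A minimal sketch of the controlled corruption used in experiments of this kind, scaling a noise recording so that the mixture reaches a target SNR (the signals here are synthetic stand-ins; the dB levels mirror those listed in the following description):</p> <pre><code class="language-python"># Hedged sketch: mix noise into clean speech at a prescribed SNR.
# The speech/noise arrays are synthetic stand-ins for the clean acronym
# recordings and the factory/jet/babble/destroyer noise types.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    noise = noise[:speech.size]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # scale noise so that 10*log10(p_speech / p_noise_scaled) == snr_db
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

speech = np.sin(2 * np.pi * 200 * np.arange(16000) / 16000.0)
noise = np.random.randn(16000)
corrupted = {snr: mix_at_snr(speech, noise, snr) for snr in (5, 15, 25)}
</code></pre> <p>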
In this paper, we set out to significantly improve speech recognition rates in non-stationary noise environments. First, 298 Navy acronyms were selected for automatic speech recognition. Data sets were collected under four types of noisy environment: factory, buccaneer jet, babble noise in a canteen, and destroyer. Within each noisy environment, four Signal-to-Noise Ratio (SNR) levels (5 dB, 15 dB, 25 dB, and clean) were introduced to corrupt the speech. Second, a new algorithm to estimate speech and non-speech regions was developed, implemented, and evaluated. Third, extensive simulations were carried out. It was found that the combination of the new algorithm, proper selection of the language model, and customized training of the speech recognizer on clean speech yielded very high recognition rates, between 80% and 90% across the four noisy conditions. Fourth, extensive comparative studies were also carried out.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Non-stationary" title="Non-stationary">Non-stationary</a>, <a href="https://publications.waset.org/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a>, <a href="https://publications.waset.org/search?q=voice%20commands." title=" voice commands."> voice commands.</a> </p> <a href="https://publications.waset.org/9177/voice-driven-applications-in-non-stationary-and-chaotic-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9177/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9177/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9177/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9177/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9177/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9177/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9177/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9177/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9177/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9177/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9177.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1533</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1637</span> Through Biometric Card in Romania: Person Identification by Face, Fingerprint and Voice Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hariton%20N.%20Costin">Hariton N. 
Costin</a>, <a href="https://publications.waset.org/search?q=Iulian%20Ciocoiu"> Iulian Ciocoiu</a>, <a href="https://publications.waset.org/search?q=Tudor%20Barbu"> Tudor Barbu</a>, <a href="https://publications.waset.org/search?q=Cristian%20Rotariu"> Cristian Rotariu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, three different approaches to person verification and identification, by means of fingerprint, face, and voice recognition, are studied. Face recognition uses parts-based representation methods and a manifold learning approach. The assessment criterion is recognition accuracy. The techniques under investigation are: (a) Local Non-negative Matrix Factorization (LNMF); (b) Independent Component Analysis (ICA); (c) NMF with sparse constraints (NMFsc); (d) Locality Preserving Projections (Laplacianfaces). Fingerprint detection was approached by classical minutiae (small graphical pattern) matching through image segmentation, using a structural approach and a neural network as the decision block. For voice/speaker recognition, mel cepstral and delta-delta mel cepstral analysis were used as the main methods to construct a supervised, speaker-dependent voice recognition system. The final decision (e.g. “accept/reject” for a verification task) is taken using a majority voting technique applied to the three biometrics. The preliminary results, obtained on medium-sized databases of fingerprints, faces and voice recordings, indicate the feasibility of our study and an overall recognition precision (about 92%) permitting the use of our system for a future complex biometric card.
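<p>The fusion rule is simple enough to state in a few lines (a sketch; the individual matchers are stand-ins assumed to return accept/reject booleans):</p> <pre><code class="language-python"># Hedged sketch: majority-vote fusion of three biometric verifiers.
# Each verifier is assumed to return True (accept) or False (reject).
def majority_vote(face_ok: bool, fingerprint_ok: bool, voice_ok: bool) -> bool:
    votes = [face_ok, fingerprint_ok, voice_ok]
    return sum(votes) >= 2   # accept when at least two of three agree

# example: face and voice accept, fingerprint rejects -> overall accept
print(majority_vote(True, False, True))
</code></pre>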
title=" speech analysis."> speech analysis.</a> </p> <a href="https://publications.waset.org/8832/through-biometric-card-in-romania-person-identification-by-face-fingerprint-and-voice-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8832/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8832/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8832/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8832/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8832/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8832/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8832/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8832/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8832/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8832/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8832.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1944</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1636</span> Performance Assessment in a Voice Coil Motor for Maximizing the Energy Harvesting with Gait Motions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hector%20A.%20Tinoco">Hector A. Tinoco</a>, <a href="https://publications.waset.org/search?q=Cesar%20Garcia-Diaz"> Cesar Garcia-Diaz</a>, <a href="https://publications.waset.org/search?q=Olga%20L.%20Ocampo-Lopez"> Olga L. Ocampo-Lopez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this study, an experimental approach is established to assess the performance of different beams coupled to a Voice Coil Motor (VCM) with the aim to maximize mechanically the energy harvesting in the inductive transducer that is included on it. The VCM is extracted from a recycled hard disk drive (HDD) and it is adapted for carrying out experimental tests of energy harvesting. Two individuals were selected for walking with the VCM-beam device as well as to evaluate the performance varying two parameters in the beam; length of the beams and a mass addition. 
Results show that energy harvesting is maximized with specific beams; however, the harvesting efficiency improves when a mass is added to the end of the beams.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Hard%20disk%20drive" title="Hard disk drive">Hard disk drive</a>, <a href="https://publications.waset.org/search?q=HDD" title=" HDD"> HDD</a>, <a href="https://publications.waset.org/search?q=energy%20harvesting" title=" energy harvesting"> energy harvesting</a>, <a href="https://publications.waset.org/search?q=voice%20coil%20motor" title=" voice coil motor"> voice coil motor</a>, <a href="https://publications.waset.org/search?q=VCM" title=" VCM"> VCM</a>, <a href="https://publications.waset.org/search?q=energy%20harvester" title=" energy harvester"> energy harvester</a>, <a href="https://publications.waset.org/search?q=gait%20motions." title=" gait motions."> gait motions.</a> </p> <a href="https://publications.waset.org/10006240/performance-assessment-in-a-voice-coil-motor-for-maximizing-the-energy-harvesting-with-gait-motions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10006240/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10006240/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10006240/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10006240/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10006240/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10006240/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10006240/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10006240/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10006240/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10006240/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10006240.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1484</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1635</span> Speech Activated Automation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Rui%20Antunes">Rui Antunes</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This article presents a simple way to execute programmed voice commands through the interface of commercial digital and analogue input/output PCI cards used in robotics and automation applications. Robots and automation equipment can "listen" to voice commands and perform several different tasks, approximating human behavior and improving human-machine interfaces for the automation industry.
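</p> <p>Conceptually, the glue layer is a dispatch table from recognized phrases to I/O writes. A sketch of that idea follows (write_digital_output and the channel numbers are hypothetical placeholders for whatever DLL call an installed PCI card actually exposes):</p> <pre><code class="language-python"># Hedged sketch: dispatch recognized voice commands to digital outputs.
# write_digital_output() is a hypothetical stand-in for a vendor DLL
# call (e.g. via ctypes) of a real multi-channel digital I/O PCI card.
def write_digital_output(channel: int, value: int) -> None:
    print(f"DO[{channel}] = {value}")   # placeholder for the DLL call

COMMANDS = {
    "conveyor on":  lambda: write_digital_output(0, 1),
    "conveyor off": lambda: write_digital_output(0, 0),
    "open gripper": lambda: write_digital_output(7, 1),
}

def handle(recognized_text: str) -> None:
    action = COMMANDS.get(recognized_text.strip().lower())
    if action is None:
        print("unknown command:", recognized_text)
    else:
        action()

handle("Conveyor ON")   # prints: DO[0] = 1
</code></pre> <p>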
Since most PCI digital and analogue input/output cards are sold with several DLLs included (for use with different programming languages), speech recognition capability can be added using a standard speech recognition engine compatible with the programming language used. In this work, a Visual Basic 6 application was created that listens to several voice commands and communicates directly with standard 128-channel digital I/O PCI cards, used to control complete automation systems with up to (number of boards used) × 128 sensors and/or actuators.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20Recognition" title="Speech Recognition">Speech Recognition</a>, <a href="https://publications.waset.org/search?q=Automation" title=" Automation"> Automation</a>, <a href="https://publications.waset.org/search?q=Robotics." title=" Robotics."> Robotics.</a> </p> <a href="https://publications.waset.org/4201/speech-activated-automation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4201/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4201/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4201/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4201/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4201/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4201/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4201/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4201/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4201/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4201/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4201.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1835</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1634</span> Comparative Study of Affricate Initial Consonants in Chinese and Slovak</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Maria%20Istvanova">Maria Istvanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The purpose of this comparative study of the affricate consonants in Chinese and Slovak is to raise awareness of the main distinguishing features between these two languages with respect to this particular group of consonants.
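</p> <p>Measurement of this kind reduces to simple arithmetic once the burst and voicing onsets are annotated; a sketch of the voice onset time (VOT) comparison described next (the annotation times below are invented examples, not data from the study):</p> <pre><code class="language-python"># Hedged sketch: computing voice onset time (VOT) from annotated burst
# and voicing-onset times, then comparing speaker groups. The times
# below are invented placeholders, not measurements from the study.
from statistics import mean

# (burst time, voicing onset time) in seconds, per token of one affricate
native = [(0.112, 0.198), (0.240, 0.331), (0.415, 0.502)]
learner = [(0.101, 0.163), (0.298, 0.352), (0.488, 0.551)]

def mean_vot(tokens):
    return mean(voicing - burst for burst, voicing in tokens)

print(f"native mean VOT:  {mean_vot(native) * 1000:.1f} ms")
print(f"learner mean VOT: {mean_vot(learner) * 1000:.1f} ms")
</code></pre> <p>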
We determine the main difficulties Slovak learners face in acquiring correct pronunciation of affricate initial consonants in Chinese, based on an understanding of the distinguishing features of Chinese and Slovak affricates combined with experimental measurement of voice onset time (VOT) values. The software tool Praat is used to analyze the recorded language samples, which contain recordings of a native Chinese speaker and of Slovak students of Chinese at different proficiency levels. Based on the results of the analysis in Praat, we identify erroneous pronunciations and clarify their causes.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Chinese" title="Chinese">Chinese</a>, <a href="https://publications.waset.org/search?q=comparative%20study" title=" comparative study"> comparative study</a>, <a href="https://publications.waset.org/search?q=initial%20consonants" title=" initial consonants"> initial consonants</a>, <a href="https://publications.waset.org/search?q=pronunciation" title=" pronunciation"> pronunciation</a>, <a href="https://publications.waset.org/search?q=Slovak" title=" Slovak"> Slovak</a> </p> <a href="https://publications.waset.org/10012295/comparative-study-of-affricate-initial-consonants-in-chinese-and-slovak" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10012295/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10012295/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10012295/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10012295/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10012295/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10012295/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10012295/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10012295/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10012295/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10012295/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10012295.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">475</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1633</span> A Simple Adaptive Atomic Decomposition Voice Activity Detector Implemented by Matching Pursuit</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Thomas%20Bryan">Thomas Bryan</a>, <a href="https://publications.waset.org/search?q=Veton%20Kepuska"> Veton Kepuska</a>, <a href="https://publications.waset.org/search?q=Ivica%20Kostanic"> Ivica Kostanic</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> A simple adaptive voice activity detector (VAD) is implemented using Gabor and gammatone atomic decomposition of speech for high Gaussian noise environments. Matching pursuit is used for atomic decomposition, and is shown to achieve optimal speech detection capability at high data compression rates for low signal to noise ratios. The most active dictionary elements found by matching pursuit are used for the signal reconstruction so that the algorithm adapts to the individual speakers dominant time-frequency characteristics. Speech has a high peak to average ratio enabling matching pursuit greedy heuristic of highest inner products to isolate high energy speech components in high noise environments. Gabor and gammatone atoms are both investigated with identical logarithmically spaced center frequencies, and similar bandwidths. The algorithm performs equally well for both Gabor and gammatone atoms with no significant statistical differences. The algorithm achieves 70% accuracy at a 0 dB SNR, 90% accuracy at a 5 dB SNR and 98% accuracy at a 20dB SNR using 30d B SNR as a reference for voice activity. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Atomic%20Decomposition" title="Atomic Decomposition">Atomic Decomposition</a>, <a href="https://publications.waset.org/search?q=Gabor" title=" Gabor"> Gabor</a>, <a href="https://publications.waset.org/search?q=Gammatone" title=" Gammatone"> Gammatone</a>, <a href="https://publications.waset.org/search?q=Matching%20Pursuit" title=" Matching Pursuit"> Matching Pursuit</a>, <a href="https://publications.waset.org/search?q=Voice%20Activity%20Detection." title=" Voice Activity Detection."> Voice Activity Detection.</a> </p> <a href="https://publications.waset.org/10001492/a-simple-adaptive-atomic-decomposition-voice-activity-detector-implemented-by-matching-pursuit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10001492/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10001492/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10001492/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10001492/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10001492/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10001492/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10001492/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10001492/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10001492/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10001492/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10001492.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1793</span> </span> </div> </div> <ul class="pagination"> <li class="page-item 
disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=55">55</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=56">56</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=voice%20features.&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>