CINXE.COM
Search results for: tone of voice
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: tone of voice</title> <meta name="description" content="Search results for: tone of voice"> <meta name="keywords" content="tone of voice"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="tone of voice" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="tone of voice"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 166</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: tone of voice</h1> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">166</span> The Effect of the Hemispheres of the Brain and the Tone of Voice on Persuasion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Rica%20Jell%20de%20Laza">Rica Jell de Laza</a>, <a href="https://publications.waset.org/search?q=Jose%20Alberto%20Fernandez"> Jose Alberto Fernandez</a>, <a href="https://publications.waset.org/search?q=Andrea%20Marie%20Mendoza"> Andrea Marie Mendoza</a>, <a href="https://publications.waset.org/search?q=Qristin%20Jeuel%20Regalado"> Qristin Jeuel Regalado</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This study investigates whether participants experience different levels of persuasion depending on the hemisphere of the brain and the tone of voice. The experiment was performed on 96 volunteer undergraduate students taking an introductory course in psychology. The participants took part in a 2 x 3 (Hemisphere: left, right x Tone of Voice: positive, neutral, negative) Mixed Factorial Design to measure how much a person was persuaded. Results showed that the hemisphere of the brain and the tone of voice used did not significantly affect the results individually. Furthermore, there was no interaction effect. Therefore, the hemispheres of the brain and the tone of voice employed play insignificant roles in persuading a person.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Dichotic%20listening" title="Dichotic listening">Dichotic listening</a>, <a href="https://publications.waset.org/search?q=brain%20hemisphere" title=" brain hemisphere"> brain hemisphere</a>, <a href="https://publications.waset.org/search?q=tone%20of%20voice" title=" tone of voice"> tone of voice</a>, <a href="https://publications.waset.org/search?q=persuasion." title=" persuasion."> persuasion.</a> </p> <a href="https://publications.waset.org/10007016/the-effect-of-the-hemispheres-of-the-brain-and-the-tone-of-voice-on-persuasion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10007016/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10007016/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10007016/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10007016/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10007016/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10007016/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10007016/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10007016/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10007016/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10007016/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10007016.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1413</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">165</span> Use of Segmentation and Color Adjustment for Skin Tone Classification in Dermatological Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=F.%20Duarte">F. Duarte</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The work aims to evaluate the use of classical image processing methodologies towards skin tone classification in dermatological images. The skin tone is an important attribute when considering several factor for skin cancer diagnosis. Currently, there is a lack of clear methodologies to classify the skin tone based only on the dermatological image. In this work, a recent released dataset with the label for skin tone was used as reference for the evaluation of classical methodologies for segmentation and adjustment of color space for classification of skin tone in dermatological images. It was noticed that even though the classical methodologies can work fine for segmentation and color adjustment, classifying the skin tone without proper control of the acquisition of the sample images ended being very unreliable.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Segmentation" title="Segmentation">Segmentation</a>, <a href="https://publications.waset.org/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/search?q=color%20space" title=" color space"> color space</a>, <a href="https://publications.waset.org/search?q=skin%20tone" title=" skin tone"> skin tone</a>, <a href="https://publications.waset.org/search?q=Fitzpatrick." title=" Fitzpatrick."> Fitzpatrick.</a> </p> <a href="https://publications.waset.org/10013880/use-of-segmentation-and-color-adjustment-for-skin-tone-classification-in-dermatological-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10013880/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10013880/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10013880/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10013880/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10013880/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10013880/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10013880/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10013880/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10013880/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10013880/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10013880.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">19</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">164</span> A Comparative Study of Various Tone Mapping Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=YasirSalih">YasirSalih</a>, <a href="https://publications.waset.org/search?q=AamirSaeed%20Malik"> AamirSaeed Malik</a>, <a href="https://publications.waset.org/search?q=Wazirahbt.Md-Esa"> Wazirahbt.Md-Esa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the recent years, high dynamic range imaging has gain popularity with the advancement in digital photography. In this contribution we present a subjective evaluation of various tone production and tone mapping techniques by a number of participants. Firstly, standard HDR images were used and the participants were asked to rate them based on a given rating scheme. After that, the participant was asked to rate HDR image generated using linear and nonlinear combination approach of multiple exposure images. The experimental results showed that linearly generated HDR images have better visualization than the nonlinear combined ones. In addition, Reinhard et al. and the exponential tone mapping operators have shown better results compared to logarithmic and the Garrett et al. tone mapping operators. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=tone%20mapping" title="tone mapping">tone mapping</a>, <a href="https://publications.waset.org/search?q=high%20dynamic%20range" title=" high dynamic range"> high dynamic range</a>, <a href="https://publications.waset.org/search?q=low%20dynamic%0Arange" title=" low dynamic range"> low dynamic range</a>, <a href="https://publications.waset.org/search?q=bits%20per%20pixel." title=" bits per pixel."> bits per pixel.</a> </p> <a href="https://publications.waset.org/9373/a-comparative-study-of-various-tone-mapping-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9373/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9373/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9373/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9373/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9373/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9373/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9373/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9373/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9373/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9373/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9373.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3352</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">163</span> Vocal Training and Practice Methods: A Glimpse on the South Indian Carnatic Music</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Raghavi%20Janaswamy">Raghavi Janaswamy</a>, <a href="https://publications.waset.org/search?q=Saraswathi%20K.%20Vasudev"> Saraswathi K. Vasudev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Music is one of the supreme arts of expressions, next to the speech itself. Its evolution over centuries has paved the way with a variety of training protocols and performing methods. Indian classical music is one of the most elaborate and refined systems with immense emphasis on the voice culture related to range, breath control, quality of the tone, flexibility and diction. Several exercises namely saraliswaram, jantaswaram, dhatuswaram, upper stayi swaram, alamkaras and varnams lay the required foundation to gain the voice culture and deeper understanding on the voice development and further on to the intricacies of the raga system. This article narrates a few of the Carnatic music training methods with an emphasis on the advanced practice methods for articulating the vocal skills, continuity in the voice, ability to produce gamakams, command in the multiple speeds of rendering with reasonable volume. The creativity on these exercises and their impact on the voice production are discussed. The articulation of the outlined conscious practice methods and vocal exercises bestow the optimum use of the natural human vocal system to not only enhance the signing quality but also to gain health benefits. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Carnatic%20music" title="Carnatic music">Carnatic music</a>, <a href="https://publications.waset.org/search?q=Saraliswaram" title=" Saraliswaram"> Saraliswaram</a>, <a href="https://publications.waset.org/search?q=Varnam" title=" Varnam"> Varnam</a>, <a href="https://publications.waset.org/search?q=Vocal%20training." title=" Vocal training. "> Vocal training. </a> </p> <a href="https://publications.waset.org/10011692/vocal-training-and-practice-methods-a-glimpse-on-the-south-indian-carnatic-music" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011692/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011692/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011692/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011692/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011692/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011692/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011692/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011692/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011692/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011692/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011692.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">785</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">162</span> Statistical Modeling of Mandarin Tone Sandhi: Neutralization of Underlying Pitch Targets</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Si%20Chen">Si Chen</a>, <a href="https://publications.waset.org/search?q=Caroline%20Wiltshire"> Caroline Wiltshire</a>, <a href="https://publications.waset.org/search?q=Bin%20Li"> Bin Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This study statistically models the surface f0 contour and the underlying pitch target of a well-studied third sandhi tone of Mandarin Chinese. Although the growth curve analysis on the surface f0 contours indicates non-neutralization of this sandhi tone (T3) and the base T2, their underlying pitch targets do show neutralization. These results in Mandarin are also consistent with the perception of native speakers, where they cannot distinguish the third T3 from the base T2, compensating contextual variation. It is possible to use the proposed statistical procedure of testing underlying pitch targets to verify tone sandhi processes in other tonal languages.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Growth%20curve%20analysis" title="Growth curve analysis">Growth curve analysis</a>, <a href="https://publications.waset.org/search?q=tone%20sandhi" title=" tone sandhi"> tone sandhi</a>, <a href="https://publications.waset.org/search?q=underlying%20pitch%20targets." title=" underlying pitch targets."> underlying pitch targets.</a> </p> <a href="https://publications.waset.org/10007000/statistical-modeling-of-mandarin-tone-sandhi-neutralization-of-underlying-pitch-targets" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10007000/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10007000/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10007000/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10007000/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10007000/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10007000/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10007000/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10007000/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10007000/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10007000/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10007000.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">974</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">161</span> Design and Construction of Microcontroller-Based Telephone Exchange System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Aye%20Sandar%20Win">Aye Sandar Win</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper demonstrates design and construction of microcontroller-based telephone exchange system and the aims of this paper is to study telecommunication, connection with PIC16F877A and DTMF MT8870D. In microcontroller system, PIC 16F877 microcontroller is used to control the call processing. Dial tone, busy tone and ring tone are provided during call progress. Instead of using ready made tone generator IC, oscillator based tone generator is used. The results of this telephone exchange system are perfect for homes and small businesses needing the extensions. It requires the phone operation control system, the analog interface circuit and the switching circuit. This exchange design will contain eight channels. It is the best low cost, good quality telephone exchange for today-s telecommunication needs. It offers the features available in much more expensive PBX units without using high-priced phones. It is for long distance telephone services. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Control%20software" title="Control software">Control software</a>, <a href="https://publications.waset.org/search?q=DTMF%20receiver%20and%20decoder" title=" DTMF receiver and decoder"> DTMF receiver and decoder</a>, <a href="https://publications.waset.org/search?q=hooksensing" title=" hooksensing"> hooksensing</a>, <a href="https://publications.waset.org/search?q=microcontroller%20system" title=" microcontroller system"> microcontroller system</a>, <a href="https://publications.waset.org/search?q=power%20supply" title=" power supply"> power supply</a>, <a href="https://publications.waset.org/search?q=ring%20generator%20andoscillator%20based%20tone%20generator." title=" ring generator andoscillator based tone generator."> ring generator andoscillator based tone generator.</a> </p> <a href="https://publications.waset.org/3916/design-and-construction-of-microcontroller-based-telephone-exchange-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3916/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3916/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3916/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3916/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3916/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3916/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3916/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3916/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3916/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3916/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3916.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">7719</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">160</span> Transformation of Vocal Characteristics: A Review of Literature</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Dong-Yan%20Huang">Dong-Yan Huang</a>, <a href="https://publications.waset.org/search?q=Ee%20Ping%20Ong"> Ee Ping Ong</a>, <a href="https://publications.waset.org/search?q=Susanto%20Rahardja"> Susanto Rahardja</a>, <a href="https://publications.waset.org/search?q=Minghui%20Dong"> Minghui Dong</a>, <a href="https://publications.waset.org/search?q=Haizhou%20Li"> Haizhou Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The transformation of vocal characteristics aims at modifying voice such that the intelligibility of aphonic voice is increased or the voice characteristics of a speaker (source speaker) to be perceived as if another speaker (target speaker) had uttered it. In this paper, the current state-of-the-art voice characteristics transformation methodology is reviewed. Special emphasis is placed on voice transformation methodology and issues for improving the transformed speech quality in intelligibility and naturalness are discussed. In particular, it is suggested to use the modulation theory of speech as a base for research on high quality voice transformation. This approach allows one to separate linguistic, expressive, organic and perspective information of speech, based on an analysis of how they are fused when speech is produced. Therefore, this theory provides the fundamentals not only for manipulating non-linguistic, extra-/paralinguistic and intra-linguistic variables for voice transformation, but also for paving the way for easily transposing the existing voice transformation methods to emotion-related voice quality transformation and speaking style transformation. From the perspectives of human speech production and perception, the popular voice transformation techniques are described and classified them based on the underlying principles either from the speech production or perception mechanisms or from both. In addition, the advantages and limitations of voice transformation techniques and the experimental manipulation of vocal cues are discussed through examples from past and present research. Finally, a conclusion and road map are pointed out for more natural voice transformation algorithms in the future. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Voice%20transformation" title="Voice transformation">Voice transformation</a>, <a href="https://publications.waset.org/search?q=Voice%20Quality" title=" Voice Quality"> Voice Quality</a>, <a href="https://publications.waset.org/search?q=Emotion" title=" Emotion"> Emotion</a>, <a href="https://publications.waset.org/search?q=Individuality" title="Individuality">Individuality</a>, <a href="https://publications.waset.org/search?q=Speaking%20Style" title=" Speaking Style"> Speaking Style</a>, <a href="https://publications.waset.org/search?q=Speech%20Production" title=" Speech Production"> Speech Production</a>, <a href="https://publications.waset.org/search?q=Speech%20Perception." title=" Speech Perception."> Speech Perception.</a> </p> <a href="https://publications.waset.org/4782/transformation-of-vocal-characteristics-a-review-of-literature" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4782/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4782/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4782/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4782/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4782/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4782/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4782/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4782/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4782/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4782/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4782.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2043</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">159</span> The Functions of the Student Voice and Student-Centered Teaching Practices in Classroom-Based Music Education</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sofia%20Douklia">Sofia Douklia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The present context paper aims to present the important role of ‘student voice’ and the music teacher in the classroom, which contributes to more student-centered music education. The aim is to focus on the functions of the student voice through the music spectrum, which has been born in the music classroom, and the teacher’s methodologies and techniques used in the music classroom. The music curriculum, the principles of student-centered music education, and the role of students and teachers as music ambassadors have been considered the major music parameters of student voice. The student- voice is a worth-mentioning aspect of a student-centered education, and all teachers should consider and promote its existence in their classroom.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Student%E2%80%99s%20voice" title="Student’s voice">Student’s voice</a>, <a href="https://publications.waset.org/search?q=student-centered%20education" title=" student-centered education"> student-centered education</a>, <a href="https://publications.waset.org/search?q=music%20ambassadors" title=" music ambassadors"> music ambassadors</a>, <a href="https://publications.waset.org/search?q=music%20teachers." title=" music teachers."> music teachers.</a> </p> <a href="https://publications.waset.org/10013235/the-functions-of-the-student-voice-and-student-centered-teaching-practices-in-classroom-based-music-education" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10013235/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10013235/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10013235/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10013235/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10013235/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10013235/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10013235/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10013235/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10013235/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10013235/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10013235.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">158</span> A Review: Comparative Analysis of Arduino Micro Controllers in Robotic Car</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=C.%20Rajan">C. Rajan</a>, <a href="https://publications.waset.org/search?q=B.%20Megala"> B. Megala</a>, <a href="https://publications.waset.org/search?q=A.%20Nandhini"> A. Nandhini</a>, <a href="https://publications.waset.org/search?q=C.%20Rasi%20Priya"> C. Rasi Priya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Robotics brings together several very different engineering areas and skills. There are various types of robot such as humanoid robot, mobile robots, remotely operated vehicles, modern autonomous robots etc. This survey paper advocates the operation of a robotic car (remotely operated vehicle) that is controlled by a mobile phone (communicate on a large scale over a large distance even from different cities). The person makes a call to the mobile phone placed in the car. In the case of a call, if any one of the button is pressed, a tone equivalent to the button pressed is heard at the other end of the call. This tone is known as DTMF (Dual Tone Multiple Frequency). The car recognizes this DTMF tone with the help of the phone stacked in the car. The received tone is processed by the Arduino microcontroller. The microcontroller is programmed to acquire a decision for any given input and outputs its decision to motor drivers in order to drive the motors in the forward direction or backward direction or left or right direction. The mobile phone that makes a call to cell phone stacked in the car act as a remote.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Arduino%20Micro-controller" title="Arduino Micro-controller">Arduino Micro-controller</a>, <a href="https://publications.waset.org/search?q=Arduino%20UNO" title=" Arduino UNO"> Arduino UNO</a>, <a href="https://publications.waset.org/search?q=DTMF" title=" DTMF"> DTMF</a>, <a href="https://publications.waset.org/search?q=Mobile%20phone" title=" Mobile phone"> Mobile phone</a>, <a href="https://publications.waset.org/search?q=Robotic%20car." title=" Robotic car."> Robotic car.</a> </p> <a href="https://publications.waset.org/10001073/a-review-comparative-analysis-of-arduino-micro-controllers-in-robotic-car" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10001073/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10001073/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10001073/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10001073/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10001073/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10001073/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10001073/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10001073/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10001073/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10001073/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10001073.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">4236</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">157</span> Speaker Recognition Using LIRA Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nestor%20A.%20Garcia%20Fragoso">Nestor A. Garcia Fragoso</a>, <a href="https://publications.waset.org/search?q=Tetyana%20Baydyk"> Tetyana Baydyk</a>, <a href="https://publications.waset.org/search?q=Ernst%20Kussul"> Ernst Kussul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This article contains information from our investigation in the field of voice recognition. For this purpose, we created a voice database that contains different phrases in two languages, English and Spanish, for men and women. As a classifier, the LIRA (Limited Receptive Area) grayscale neural classifier was selected. The LIRA grayscale neural classifier was developed for image recognition tasks and demonstrated good results. Therefore, we decided to develop a recognition system using this classifier for voice recognition. From a specific set of speakers, we can recognize the speaker’s voice. For this purpose, the system uses spectrograms of the voice signals as input to the system, extracts the characteristics and identifies the speaker. The results are described and analyzed in this article. The classifier can be used for speaker identification in security system or smart buildings for different types of intelligent devices.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Extreme%20learning" title="Extreme learning">Extreme learning</a>, <a href="https://publications.waset.org/search?q=LIRA%20neural%20classifier" title=" LIRA neural classifier"> LIRA neural classifier</a>, <a href="https://publications.waset.org/search?q=speaker%20identification" title=" speaker identification"> speaker identification</a>, <a href="https://publications.waset.org/search?q=voice%20recognition." title=" voice recognition. "> voice recognition. </a> </p> <a href="https://publications.waset.org/10010982/speaker-recognition-using-lira-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10010982/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10010982/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10010982/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10010982/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10010982/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10010982/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10010982/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10010982/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10010982/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10010982/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10010982.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">764</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">156</span> PAPR Reduction of FBMC Using Sliding Window Tone Reservation Active Constellation Extension Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=V.%20Sandeep%20Kumar">V. Sandeep Kumar</a>, <a href="https://publications.waset.org/search?q=S.%20Anuradha"> S. Anuradha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The high Peak to Average Power Ratio (PAR) in Filter Bank Multicarrier with Offset Quadrature Amplitude Modulation (FBMC-OQAM) can significantly reduce power efficiency and performance. In this paper, we address the problem of PAPR reduction for FBMC-OQAM systems using Tone Reservation (TR) technique. Due to the overlapping structure of FBMCOQAM signals, directly applying TR schemes of OFDM systems to FBMC-OQAM systems is not effective. We improve the tone reservation (TR) technique by employing sliding window with Active Constellation Extension for the PAPR reduction of FBMC-OQAM signals, called sliding window tone reservation Active Constellation Extension (SW-TRACE) technique. The proposed SW-TRACE technique uses the peak reduction tones (PRTs) of several consecutive data blocks to cancel the peaks of the FBMC-OQAM signal inside a window, with dynamically extending outer constellation points in active(data-carrying) channels, within margin-preserving constraints, in order to minimize the peak magnitude. Analysis and simulation results compared to the existing Tone Reservation (TR) technique for FBMC/OQAM system. The proposed method SW-TRACE has better PAPR performance and lower computational complexity.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=FBMC-OQAM" title="FBMC-OQAM">FBMC-OQAM</a>, <a href="https://publications.waset.org/search?q=peak-to-average%20power%20ratio" title=" peak-to-average power ratio"> peak-to-average power ratio</a>, <a href="https://publications.waset.org/search?q=sliding%0D%0Awindow" title=" sliding window"> sliding window</a>, <a href="https://publications.waset.org/search?q=tone%20reservation%20Active%20Constellation%20Extension." title=" tone reservation Active Constellation Extension."> tone reservation Active Constellation Extension.</a> </p> <a href="https://publications.waset.org/9999475/papr-reduction-of-fbmc-using-sliding-window-tone-reservation-active-constellation-extension-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9999475/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9999475/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9999475/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9999475/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9999475/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9999475/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9999475/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9999475/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9999475/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9999475/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9999475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2839</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">155</span> Automatic Voice Classification System Based on Traditional Korean Medicine</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Jaehwan%20Kang">Jaehwan Kang</a>, <a href="https://publications.waset.org/search?q=Haejung%20Lee"> Haejung Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces an automatic voice classification system for the diagnosis of individual constitution based on Sasang Constitutional Medicine (SCM) in Traditional Korean Medicine (TKM). For the developing of this algorithm, we used the voices of 309 female speakers and extracted a total of 134 speech features from the voice data consisting of 5 sustained vowels and one sentence. The classification system, based on a rule-based algorithm that is derived from a non parametric statistical method, presents 3 types of decisions: reserved, positive and negative decisions. In conclusion, 71.5% of the voice data were diagnosed by this system, of which 47.7% were correct positive decisions and 69.7% were correct negative decisions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Voice%20Classifier" title="Voice Classifier">Voice Classifier</a>, <a href="https://publications.waset.org/search?q=Sasang%20Constitution%20Medicine" title=" Sasang Constitution Medicine"> Sasang Constitution Medicine</a>, <a href="https://publications.waset.org/search?q=Traditional%20Korean%20Medicine" title=" Traditional Korean Medicine"> Traditional Korean Medicine</a>, <a href="https://publications.waset.org/search?q=SCM" title=" SCM"> SCM</a>, <a href="https://publications.waset.org/search?q=TKM." title=" TKM."> TKM.</a> </p> <a href="https://publications.waset.org/5204/automatic-voice-classification-system-based-on-traditional-korean-medicine" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5204/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5204/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5204/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5204/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5204/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5204/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5204/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5204/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5204/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5204/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5204.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1389</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">154</span> Recognition by Online Modeling – a New Approach of Recognizing Voice Signals in Linear Time</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Jyh-Da%20Wei">Jyh-Da Wei</a>, <a href="https://publications.waset.org/search?q=Hsin-Chen%20Tsai"> Hsin-Chen Tsai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work presents a novel means of extracting fixedlength parameters from voice signals, such that words can be recognized in linear time. The power and the zero crossing rate are first calculated segment by segment from a voice signal; by doing so, two feature sequences are generated. We then construct an FIR system across these two sequences. The parameters of this FIR system, used as the input of a multilayer proceptron recognizer, can be derived by recursive LSE (least-square estimation), implying that the complexity of overall process is linear to the signal size. In the second part of this work, we introduce a weighting factor λ to emphasize recent input; therefore, we can further recognize continuous speech signals. Experiments employ the voice signals of numbers, from zero to nine, spoken in Mandarin Chinese. The proposed method is verified to recognize voice signals efficiently and accurately. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20Recognition" title="Speech Recognition">Speech Recognition</a>, <a href="https://publications.waset.org/search?q=FIR%20system" title=" FIR system"> FIR system</a>, <a href="https://publications.waset.org/search?q=Recursive%20LSE" title=" Recursive LSE"> Recursive LSE</a>, <a href="https://publications.waset.org/search?q=Multilayer%20Perceptron" title=" Multilayer Perceptron"> Multilayer Perceptron</a> </p> <a href="https://publications.waset.org/3663/recognition-by-online-modeling-a-new-approach-of-recognizing-voice-signals-in-linear-time" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3663/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3663/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3663/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3663/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3663/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3663/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3663/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3663/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3663/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3663/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3663.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1417</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">153</span> Minimum Data of a Speech Signal as Special Indicators of Identification in Phonoscopy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nazaket%20Gazieva">Nazaket Gazieva</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Voice biometric data associated with physiological, psychological and other factors are widely used in forensic phonoscopy. There are various methods for identifying and verifying a person by voice. This article explores the minimum speech signal data as individual parameters of a speech signal. Monozygotic twins are believed to be genetically identical. Using the minimum data of the speech signal, we came to the conclusion that the voice imprint of monozygotic twins is individual. According to the conclusion of the experiment, we can conclude that the minimum indicators of the speech signal are more stable and reliable for phonoscopic examinations.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Biometric%20voice%20prints" title="Biometric voice prints">Biometric voice prints</a>, <a href="https://publications.waset.org/search?q=fundamental%20frequency" title=" fundamental frequency"> fundamental frequency</a>, <a href="https://publications.waset.org/search?q=phonogram" title=" phonogram"> phonogram</a>, <a href="https://publications.waset.org/search?q=speech%20signal" title=" speech signal"> speech signal</a>, <a href="https://publications.waset.org/search?q=temporal%20characteristics." title=" temporal characteristics."> temporal characteristics.</a> </p> <a href="https://publications.waset.org/10011375/minimum-data-of-a-speech-signal-as-special-indicators-of-identification-in-phonoscopy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011375/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011375/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011375/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011375/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011375/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011375/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011375/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011375/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011375/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011375/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011375.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">577</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">152</span> A Survey on Voice over IP over Wireless LANs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Haniyeh%20Kazemitabar">Haniyeh Kazemitabar</a>, <a href="https://publications.waset.org/search?q=Sameha%20Ahmed"> Sameha Ahmed</a>, <a href="https://publications.waset.org/search?q=Kashif%20Nisar"> Kashif Nisar</a>, <a href="https://publications.waset.org/search?q=Abas%20B%20Said"> Abas B Said</a>, <a href="https://publications.waset.org/search?q=Halabi%20B%20Hasbullah"> Halabi B Hasbullah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Voice over Internet Protocol (VoIP) is a form of voice communication that uses audio data to transmit voice signals to the end user. VoIP is one of the most important technologies in the World of communication. Around, 20 years of research on VoIP, some problems of VoIP are still remaining. During the past decade and with growing of wireless technologies, we have seen that many papers turn their concentration from Wired-LAN to Wireless-LAN. VoIP over Wireless LAN (WLAN) faces many challenges due to the loose nature of wireless network. Issues like providing Quality of Service (QoS) at a good level, dedicating capacity for calls and having secure calls is more difficult rather than wired LAN. Therefore VoIP over WLAN (VoWLAN) remains a challenging research topic. In this paper we consolidate and address major VoWLAN issues. This research is helpful for those researchers wants to do research in Voice over IP technology over WLAN network. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Capacity" title="Capacity">Capacity</a>, <a href="https://publications.waset.org/search?q=QoS" title=" QoS"> QoS</a>, <a href="https://publications.waset.org/search?q=Security" title=" Security"> Security</a>, <a href="https://publications.waset.org/search?q=VoIP%20Issues" title=" VoIP Issues"> VoIP Issues</a>, <a href="https://publications.waset.org/search?q=WLAN." title=" WLAN."> WLAN.</a> </p> <a href="https://publications.waset.org/12438/a-survey-on-voice-over-ip-over-wireless-lans" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12438/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12438/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12438/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12438/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12438/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12438/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12438/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12438/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12438/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12438/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2245</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">151</span> VoIP Source Model based on the Hyperexponential Distribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Arkadiusz%20Biernacki">Arkadiusz Biernacki</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present a statistical analysis of Voice over IP (VoIP) packet streams produced by the G.711 voice coder with voice activity detection (VAD). During telephone conversation, depending whether the interlocutor speaks (ON) or remains silent (OFF), packets are produced or not by a voice coder. As index of dispersion for both ON and OFF times distribution was greater than one, we used hyperexponential distribution for approximation of streams duration. For each stage of the hyperexponential distribution, we tested goodness of our fits using graphical methods, we calculated estimation errors, and performed Kolmogorov-Smirnov test. Obtained results showed that the precise VoIP source model can be based on the five-state Markov process. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=VoIP%20source%20modelling" title="VoIP source modelling">VoIP source modelling</a>, <a href="https://publications.waset.org/search?q=distribution%20approximation" title=" distribution approximation"> distribution approximation</a>, <a href="https://publications.waset.org/search?q=hyperexponential%20distribution." title=" hyperexponential distribution."> hyperexponential distribution.</a> </p> <a href="https://publications.waset.org/10390/voip-source-model-based-on-the-hyperexponential-distribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10390/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10390/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10390/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10390/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10390/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10390/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10390/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10390/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10390/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10390/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10390.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1710</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">150</span> Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Vesna%20Kirandziska">Vesna Kirandziska</a>, <a href="https://publications.waset.org/search?q=Nevena%20Ackovska"> Nevena Ackovska</a>, <a href="https://publications.waset.org/search?q=Ana%20Madevska%20Bogdanova"> Ana Madevska Bogdanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The problem of emotion recognition is a challenging problem. It is still an open problem from the aspect of both intelligent systems and psychology. In this paper, both voice features and facial features are used for building an emotion recognition system. A Support Vector Machine classifiers are built by using raw data from video recordings. In this paper, the results obtained for the emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison between the classifiers build from facial data only, voice data only and from the combination of both data is made here. The need for a better combination of the information from facial expression and voice data is argued.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Emotion%20recognition" title="Emotion recognition">Emotion recognition</a>, <a href="https://publications.waset.org/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/search?q=machine%20learning." title=" machine learning. "> machine learning. </a> </p> <a href="https://publications.waset.org/10004221/comparing-emotion-recognition-from-voice-and-facial-data-using-time-invariant-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10004221/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10004221/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10004221/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10004221/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10004221/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10004221/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10004221/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10004221/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10004221/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10004221/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10004221.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2018</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">149</span> Secure peerTalk Using PEERT System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nebu%20Tom%20John">Nebu Tom John</a>, <a href="https://publications.waset.org/search?q=N.%20Dhinakaran"> N. Dhinakaran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Multiparty voice over IP (MVoIP) systems allows a group of people to freely communicate each other via the internet, which have many applications such as online gaming, teleconferencing, online stock trading etc. Peertalk is a peer to peer multiparty voice over IP system (MVoIP) which is more feasible than existing approaches such as p2p overlay multicast and coupled distributed processing. Since the stream mixing and distribution are done by the peers, it is vulnerable to major security threats like nodes misbehavior, eavesdropping, Sybil attacks, Denial of Service (DoS), call tampering, Man in the Middle attacks etc. To thwart the security threats, a security framework called PEERTS (PEEred Reputed Trustworthy System for peertalk) is implemented so that efficient and secure communication can be carried out between peers.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Key%20management%20system" title="Key management system">Key management system</a>, <a href="https://publications.waset.org/search?q=peer-to-peer%20voice%0D%0Astreaming" title=" peer-to-peer voice streaming"> peer-to-peer voice streaming</a>, <a href="https://publications.waset.org/search?q=reputed%20trust%20management%20system" title=" reputed trust management system"> reputed trust management system</a>, <a href="https://publications.waset.org/search?q=voice-over-IP." title=" voice-over-IP."> voice-over-IP.</a> </p> <a href="https://publications.waset.org/10427/secure-peertalk-using-peert-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10427/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10427/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10427/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10427/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10427/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10427/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10427/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10427/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10427/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10427/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10427.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1882</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">148</span> Search Engine Module in Voice Recognition Browser to Facilitate the Visually Impaired in Virtual Learning (MGSYS VISI-VL)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nurulisma%20Ismail">Nurulisma Ismail</a>, <a href="https://publications.waset.org/search?q=Halimah%20Badioze%20Zaman"> Halimah Badioze Zaman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, web-based technologies influence in people-s daily life such as in education, business and others. Therefore, many web developers are too eager to develop their web applications with fully animation graphics and forgetting its accessibility to its users. Their purpose is to make their web applications look impressive. Thus, this paper would highlight on the usability and accessibility of a voice recognition browser as a tool to facilitate the visually impaired and blind learners in accessing virtual learning environment. More specifically, the objectives of the study are (i) to explore the challenges faced by the visually impaired learners in accessing virtual learning environment (ii) to determine the suitable guidelines for developing a voice recognition browser that is accessible to the visually impaired. Furthermore, this study was prepared based on an observation conducted with the Malaysian visually impaired learners. Finally, the result of this study would underline on the development of an accessible voice recognition browser for the visually impaired. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Accessibility" title="Accessibility">Accessibility</a>, <a href="https://publications.waset.org/search?q=Usability" title=" Usability"> Usability</a>, <a href="https://publications.waset.org/search?q=Virtual%20Learning" title=" Virtual Learning"> Virtual Learning</a>, <a href="https://publications.waset.org/search?q=Visually%0AImpaired" title=" Visually Impaired"> Visually Impaired</a>, <a href="https://publications.waset.org/search?q=Voice%20Recognition." title=" Voice Recognition."> Voice Recognition.</a> </p> <a href="https://publications.waset.org/5268/search-engine-module-in-voice-recognition-browser-to-facilitate-the-visually-impaired-in-virtual-learning-mgsys-visi-vl" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5268/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5268/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5268/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5268/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5268/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5268/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5268/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5268/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5268/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5268/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5268.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2040</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">147</span> Computationally Efficient Signal Quality Improvement Method for VoIP System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=H.%20P.%20Singh">H. P. Singh</a>, <a href="https://publications.waset.org/search?q=S.%20Singh"> S. Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The voice signal in Voice over Internet protocol (VoIP) system is processed through the best effort policy based IP network, which leads to the network degradations including delay, packet loss jitter. The work in this paper presents the implementation of finite impulse response (FIR) filter for voice quality improvement in the VoIP system through distributed arithmetic (DA) algorithm. The VoIP simulations are conducted with AMR-NB 6.70 kbps and G.729a speech coders at different packet loss rates and the performance of the enhanced VoIP signal is evaluated using the perceptual evaluation of speech quality (PESQ) measurement for narrowband signal. The results show reduction in the computational complexity in the system and significant improvement in the quality of the VoIP voice signal.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=VoIP" title="VoIP">VoIP</a>, <a href="https://publications.waset.org/search?q=Signal%20Quality" title=" Signal Quality"> Signal Quality</a>, <a href="https://publications.waset.org/search?q=Distributed%20Arithmetic" title=" Distributed Arithmetic"> Distributed Arithmetic</a>, <a href="https://publications.waset.org/search?q=Packet%20Loss" title=" Packet Loss"> Packet Loss</a>, <a href="https://publications.waset.org/search?q=Speech%20Coder." title=" Speech Coder."> Speech Coder.</a> </p> <a href="https://publications.waset.org/6730/computationally-efficient-signal-quality-improvement-method-for-voip-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6730/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6730/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6730/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6730/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6730/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6730/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6730/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6730/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6730/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6730/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6730.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1830</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">146</span> Analysis of Vocal Fold Vibrations from High-Speed Digital Images Based On Dynamic Time Warping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=A.%20I.%20A.%20Rahman">A. I. A. Rahman</a>, <a href="https://publications.waset.org/search?q=Sh-Hussain%20Salleh"> Sh-Hussain Salleh</a>, <a href="https://publications.waset.org/search?q=K.%20Ahmad"> K. Ahmad</a>, <a href="https://publications.waset.org/search?q=K.%20Anuar"> K. Anuar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Analysis of vocal fold vibration is essential for understanding the mechanism of voice production and for improving clinical assessment of voice disorders. This paper presents a Dynamic Time Warping (DTW) based approach to analyze and objectively classify vocal fold vibration patterns. The proposed technique was designed and implemented on a Glottal Area Waveform (GAW) extracted from high-speed laryngeal images by delineating the glottal edges for each image frame. Feature extraction from the GAW was performed using Linear Predictive Coding (LPC). Several types of voice reference templates from simulations of clear, breathy, fry, pressed and hyperfunctional voice productions were used. The patterns of the reference templates were first verified using the analytical signal generated through Hilbert transformation of the GAW. Samples from normal speakers’ voice recordings were then used to evaluate and test the effectiveness of this approach. The classification of the voice patterns using the technique of LPC and DTW gave the accuracy of 81%.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Dynamic%20Time%20Warping" title="Dynamic Time Warping">Dynamic Time Warping</a>, <a href="https://publications.waset.org/search?q=Glottal%20Area%20Waveform" title=" Glottal Area Waveform"> Glottal Area Waveform</a>, <a href="https://publications.waset.org/search?q=Linear%20Predictive%20Coding" title=" Linear Predictive Coding"> Linear Predictive Coding</a>, <a href="https://publications.waset.org/search?q=High-Speed%20Laryngeal%20Images" title=" High-Speed Laryngeal Images"> High-Speed Laryngeal Images</a>, <a href="https://publications.waset.org/search?q=Hilbert%20Transform." title=" Hilbert Transform."> Hilbert Transform.</a> </p> <a href="https://publications.waset.org/9998404/analysis-of-vocal-fold-vibrations-from-high-speed-digital-images-based-on-dynamic-time-warping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998404/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998404/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998404/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998404/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998404/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998404/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998404/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998404/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998404/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998404/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998404.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2334</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">145</span> Voice Command Recognition System Based on MFCC and VQ Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mahdi%20Shaneh">Mahdi Shaneh</a>, <a href="https://publications.waset.org/search?q=Azizollah%20Taheri"> Azizollah Taheri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of this project is to design a system to recognition voice commands. Most of voice recognition systems contain two main modules as follow “feature extraction" and “feature matching". In this project, MFCC algorithm is used to simulate feature extraction module. Using this algorithm, the cepstral coefficients are calculated on mel frequency scale. VQ (vector quantization) method will be used for reduction of amount of data to decrease computation time. In the feature matching stage Euclidean distance is applied as similarity criterion. Because of high accuracy of used algorithms, the accuracy of this voice command system is high. Using these algorithms, by at least 5 times repetition for each command, in a single training session, and then twice in each testing session zero error rate in recognition of commands is achieved. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=MFCC" title="MFCC">MFCC</a>, <a href="https://publications.waset.org/search?q=Vector%20quantization" title=" Vector quantization"> Vector quantization</a>, <a href="https://publications.waset.org/search?q=Vocal%20tract" title=" Vocal tract"> Vocal tract</a>, <a href="https://publications.waset.org/search?q=Voicecommand." title=" Voicecommand."> Voicecommand.</a> </p> <a href="https://publications.waset.org/4967/voice-command-recognition-system-based-on-mfcc-and-vq-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4967/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4967/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4967/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4967/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4967/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4967/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4967/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4967/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4967/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4967/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4967.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3157</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">144</span> The Performance Analysis of CSS-based Communication Systems in the Jamming Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Youngpo%20Lee">Youngpo Lee</a>, <a href="https://publications.waset.org/search?q=Sanghun%20Kim"> Sanghun Kim</a>, <a href="https://publications.waset.org/search?q=Youngyoon%20Lee"> Youngyoon Lee</a>, <a href="https://publications.waset.org/search?q=Seokho%20Yoon"> Seokho Yoon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to its capability to resist jamming signals, chirp spread spectrum (CSS) technique has attracted much attention in the area of wireless communications. However, there has been little rigorous analysis for the performance of the CSS communication system in jamming environments. In this paper, we present analytic results on the performance of a CSS system by deriving symbol error rate (SER) expressions for a CSS M-ary phase shift keying (MPSK) system in the presence of broadband and tone jamming signals, respectively. The numerical results show that the empirical SER closely agrees with the analytic result. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=CSS" title="CSS">CSS</a>, <a href="https://publications.waset.org/search?q=DM" title=" DM"> DM</a>, <a href="https://publications.waset.org/search?q=jamming" title=" jamming"> jamming</a>, <a href="https://publications.waset.org/search?q=broadband%20jamming" title=" broadband jamming"> broadband jamming</a>, <a href="https://publications.waset.org/search?q=tone%20jamming." title=" tone jamming."> tone jamming.</a> </p> <a href="https://publications.waset.org/12792/the-performance-analysis-of-css-based-communication-systems-in-the-jamming-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12792/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12792/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12792/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12792/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12792/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12792/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12792/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12792/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12792/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12792/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12792.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1641</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">143</span> Independent Encryption Technique for Mobile Voice Calls</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nael%20Hirzalla">Nael Hirzalla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The legality of some countries or agencies’ acts to spy on personal phone calls of the public became a hot topic to many social groups’ talks. It is believed that this act is considered an invasion to someone’s privacy. Such act may be justified if it is singling out specific cases but to spy without limits is very unacceptable. This paper discusses the needs for not only a simple and light weight technique to secure mobile voice calls but also a technique that is independent from any encryption standard or library. It then presents and tests one encrypting algorithm that is based of Frequency scrambling technique to show fair and delay-free process that can be used to protect phone calls from such spying acts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Frequency%20Scrambling" title="Frequency Scrambling">Frequency Scrambling</a>, <a href="https://publications.waset.org/search?q=Mobile%20Applications" title=" Mobile Applications"> Mobile Applications</a>, <a href="https://publications.waset.org/search?q=Real-%0D%0ATime%20Voice%20Encryption" title=" Real- Time Voice Encryption"> Real- Time Voice Encryption</a>, <a href="https://publications.waset.org/search?q=Spying%20on%20Calls." title=" Spying on Calls."> Spying on Calls.</a> </p> <a href="https://publications.waset.org/10001839/independent-encryption-technique-for-mobile-voice-calls" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10001839/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10001839/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10001839/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10001839/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10001839/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10001839/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10001839/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10001839/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10001839/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10001839/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10001839.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2557</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">142</span> The Phonology and Phonetics of Second Language Intonation in Case of “Downstep”</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Tayebeh%20Norouzi">Tayebeh Norouzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This study aims to investigate the acquisition process of intonation. It examines the intonation structure of Tokyo Japanese and its realization by Iranian learners of Japanese. Seven Iranian learners of Japanese, differing in fluency, and two Japanese speakers participated in the experiment. Two sentences were used to test the phonological and phonetic characteristics of lexical pitch-accent as well as the intonation patterns produced by the speakers. Both sentences consisted of similar words with the same number of syllables and lexical pitch-accents but different syntactic structure. Speakers were asked to read each sentence three times at normal speed, and the data were analyzed by Praat. The results show that lexical pitch-accent, Accentual Phrase (AP) and AP boundary tone realization vary depending on sentence type. For sentences of type <em>XdeYwo</em>, the lexical pitch-accent is realized properly. However, there is a rise in AP boundary tone regardless of speakers’ level of fluency. In contrast, in sentences of type <em>XnoYwo</em>, the lexical pitch-accent and AP boundary tone vary depending on the speakers’ fluency level. Advanced speakers are better at grouping words into phrases and produce more native-like intonation patterns, though they are not able to realize downstep properly. The non-native speakers tried to realize proper intonation patterns by making changes in lexical accent and boundary tone.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Intonation" title="Intonation">Intonation</a>, <a href="https://publications.waset.org/search?q=Iranian%20learners" title=" Iranian learners"> Iranian learners</a>, <a href="https://publications.waset.org/search?q=Japanese%20prosody" title=" Japanese prosody"> Japanese prosody</a>, <a href="https://publications.waset.org/search?q=lexical%20accent" title=" lexical accent"> lexical accent</a>, <a href="https://publications.waset.org/search?q=second%20language%20acquisition." title=" second language acquisition. "> second language acquisition. </a> </p> <a href="https://publications.waset.org/10009466/the-phonology-and-phonetics-of-second-language-intonation-in-case-of-downstep" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10009466/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10009466/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10009466/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10009466/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10009466/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10009466/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10009466/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10009466/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10009466/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10009466/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10009466.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">988</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">141</span> A Security Model of Voice Eavesdropping Protection over Digital Networks </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Supachai%20Tangwongsan">Supachai Tangwongsan</a>, <a href="https://publications.waset.org/search?q=Sathaporn%20Kassuvan"> Sathaporn Kassuvan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The purpose of this research is to develop a security model for voice eavesdropping protection over digital networks. The proposed model provides an encryption scheme and a personal secret key exchange between communicating parties, a so-called voice data transformation system, resulting in a real-privacy conversation. The operation of this system comprises two main steps as follows: The first one is the personal secret key exchange for using the keys in the data encryption process during conversation. The key owner could freely make his/her choice in key selection, so it is recommended that one should exchange a different key for a different conversational party, and record the key for each case into the memory provided in the client device. The next step is to set and record another personal option of encryption, either taking all frames or just partial frames, so-called the figure of 1:M. Using different personal secret keys and different sets of 1:M to different parties without the intervention of the service operator, would result in posing quite a big problem for any eavesdroppers who attempt to discover the key used during the conversation, especially in a short period of time. Thus, it is quite safe and effective to protect the case of voice eavesdropping. The results of the implementation indicate that the system can perform its function accurately as designed. In this regard, the proposed system is suitable for effective use in voice eavesdropping protection over digital networks, without any requirements to change presently existing network systems, mobile phone network and VoIP, for instance.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Computer%20Security" title="Computer Security">Computer Security</a>, <a href="https://publications.waset.org/search?q=Encryption" title=" Encryption"> Encryption</a>, <a href="https://publications.waset.org/search?q=Key%20Exchange" title=" Key Exchange"> Key Exchange</a>, <a href="https://publications.waset.org/search?q=Security%20Model" title="Security Model">Security Model</a>, <a href="https://publications.waset.org/search?q=Voice%20Eavesdropping." title=" Voice Eavesdropping."> Voice Eavesdropping.</a> </p> <a href="https://publications.waset.org/3240/a-security-model-of-voice-eavesdropping-protection-over-digital-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3240/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3240/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3240/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3240/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3240/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3240/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3240/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3240/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3240/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3240/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3240.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1581</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">140</span> High-Individuality Voice Conversion Based on Concatenative Speech Synthesis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Kei%20Fujii">Kei Fujii</a>, <a href="https://publications.waset.org/search?q=Jun%20Okawa"> Jun Okawa</a>, <a href="https://publications.waset.org/search?q=Kaori%20Suigetsu"> Kaori Suigetsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Concatenative speech synthesis is a method that can make speech sound which has naturalness and high-individuality of a speaker by introducing a large speech corpus. Based on this method, in this paper, we propose a voice conversion method whose conversion speech has high-individuality and naturalness. The authors also have two subjective evaluation experiments for evaluating individuality and sound quality of conversion speech. From the results, following three facts have be confirmed: (a) the proposal method can convert the individuality of speakers well, (b) employing the framework of unit selection (especially join cost) of concatenative speech synthesis into conventional voice conversion improves the sound quality of conversion speech, and (c) the proposal method is robust against the difference of genders between a source speaker and a target speaker. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=concatenative%20speech%20synthesis" title="concatenative speech synthesis">concatenative speech synthesis</a>, <a href="https://publications.waset.org/search?q=join%20cost" title=" join cost"> join cost</a>, <a href="https://publications.waset.org/search?q=speaker%20individuality" title=" speaker individuality"> speaker individuality</a>, <a href="https://publications.waset.org/search?q=unit%20selection" title=" unit selection"> unit selection</a>, <a href="https://publications.waset.org/search?q=voice%20conversion" title=" voice conversion"> voice conversion</a> </p> <a href="https://publications.waset.org/1272/high-individuality-voice-conversion-based-on-concatenative-speech-synthesis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/1272/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/1272/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/1272/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/1272/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/1272/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/1272/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/1272/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/1272/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/1272/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/1272/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/1272.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1939</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">139</span> Automotive 3-Microphone Noise Canceller in a Frequently Moving Noise Source Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Z.%20Qi">Z. Qi</a>, <a href="https://publications.waset.org/search?q=T.%20J.%20Moir"> T. J. Moir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>A combined three-microphone voice activity detector (VAD) and noise-canceling system is studied to enhance speech recognition in an automobile environment. A previous experiment clearly shows the ability of the composite system to cancel a single noise source outside of a defined zone. This paper investigates the performance of the composite system when there are frequently moving noise sources (noise sources are coming from different locations but are not always presented at the same time) e.g. there is other passenger speech or speech from a radio when a desired speech is presented. To work in a frequently moving noise sources environment, whilst a three-microphone voice activity detector (VAD) detects voice from a “VAD valid zone", the 3-microphone noise canceller uses a “noise canceller valid zone" defined in freespace around the users head. Therefore, a desired voice should be in the intersection of the noise canceller valid zone and VAD valid zone. Thus all noise is suppressed outside this intersection of area. Experiments are shown for a real environment e.g. all results were recorded in a car by omni-directional electret condenser microphones.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Signal%20processing" title="Signal processing">Signal processing</a>, <a href="https://publications.waset.org/search?q=voice%20activity%20detection" title=" voice activity detection"> voice activity detection</a>, <a href="https://publications.waset.org/search?q=noise%20canceller" title=" noise canceller"> noise canceller</a>, <a href="https://publications.waset.org/search?q=microphone%20array%20beam%20forming." title=" microphone array beam forming."> microphone array beam forming.</a> </p> <a href="https://publications.waset.org/12708/automotive-3-microphone-noise-canceller-in-a-frequently-moving-noise-source-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12708/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12708/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12708/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12708/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12708/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12708/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12708/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12708/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12708/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12708/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12708.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1612</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">138</span> Voice Driven Applications in Non-stationary and Chaotic Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=C.%20Kwan">C. Kwan</a>, <a href="https://publications.waset.org/search?q=X.%20Li"> X. Li</a>, <a href="https://publications.waset.org/search?q=D.%20Lao"> D. Lao</a>, <a href="https://publications.waset.org/search?q=Y.%20Deng"> Y. Deng</a>, <a href="https://publications.waset.org/search?q=Z.%20Ren"> Z. Ren</a>, <a href="https://publications.waset.org/search?q=B.%20Raj"> B. Raj</a>, <a href="https://publications.waset.org/search?q=R.%20Singh"> R. Singh</a>, <a href="https://publications.waset.org/search?q=R.%20Stern"> R. Stern</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Automated operations based on voice commands will become more and more important in many applications, including robotics, maintenance operations, etc. However, voice command recognition rates drop quite a lot under non-stationary and chaotic noise environments. In this paper, we tried to significantly improve the speech recognition rates under non-stationary noise environments. First, 298 Navy acronyms have been selected for automatic speech recognition. Data sets were collected under 4 types of noisy environments: factory, buccaneer jet, babble noise in a canteen, and destroyer. Within each noisy environment, 4 levels (5 dB, 15 dB, 25 dB, and clean) of Signal-to-Noise Ratio (SNR) were introduced to corrupt the speech. Second, a new algorithm to estimate speech or no speech regions has been developed, implemented, and evaluated. Third, extensive simulations were carried out. It was found that the combination of the new algorithm, the proper selection of language model and a customized training of the speech recognizer based on clean speech yielded very high recognition rates, which are between 80% and 90% for the four different noisy conditions. Fourth, extensive comparative studies have also been carried out.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Non-stationary" title="Non-stationary">Non-stationary</a>, <a href="https://publications.waset.org/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a>, <a href="https://publications.waset.org/search?q=voice%20commands." title=" voice commands."> voice commands.</a> </p> <a href="https://publications.waset.org/9177/voice-driven-applications-in-non-stationary-and-chaotic-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9177/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9177/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9177/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9177/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9177/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9177/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9177/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9177/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9177/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9177/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9177.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1533</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">137</span> Measurement of Rheologic Properties of Soft Tissue (Muscle Tissue) by Myotonometer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Petr%20%C5%A0ifta">Petr Šifta</a>, <a href="https://publications.waset.org/search?q=V%C3%A1clav%20Bittner"> Václav Bittner</a>, <a href="https://publications.waset.org/search?q=Martin%20Kysela"> Martin Kysela</a>, <a href="https://publications.waset.org/search?q=Mat%C4%9Bj%20Kol%C3%A1%C5%99"> Matěj Kolář</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The purpose of the research described in this work is to answer how to measure the rheologic (viscoelastic) properties tendo–deformational characteristics of soft tissue. The method would also resemble muscle palpation examination as it is known in clinical practice. For this purpose, an instrument with the working name “myotonometer” has been used. At present, there is lack of objective methods for assessing the muscle tone by viscous and elastic properties of soft tissue. That is why we decided to focus on creating or finding quantitative and qualitative methodology capable to specify muscle tone.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Rheologicproperties" title="Rheologicproperties">Rheologicproperties</a>, <a href="https://publications.waset.org/search?q=tendo%E2%80%93deformational%0D%0Acharacteristics" title=" tendo–deformational characteristics"> tendo–deformational characteristics</a>, <a href="https://publications.waset.org/search?q=viscosity" title=" viscosity"> viscosity</a>, <a href="https://publications.waset.org/search?q=elasticity" title=" elasticity"> elasticity</a>, <a href="https://publications.waset.org/search?q=hypertonus" title=" hypertonus"> hypertonus</a>, <a href="https://publications.waset.org/search?q=spasticity." title=" spasticity."> spasticity.</a> </p> <a href="https://publications.waset.org/10003342/measurement-of-rheologic-properties-of-soft-tissue-muscle-tissue-by-myotonometer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10003342/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10003342/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10003342/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10003342/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10003342/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10003342/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10003342/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10003342/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10003342/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10003342/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10003342.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1995</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=tone%20of%20voice&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=tone%20of%20voice&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=tone%20of%20voice&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=tone%20of%20voice&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=tone%20of%20voice&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=tone%20of%20voice&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>