Search results for: perceptual speech coding
<a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="perceptual speech coding"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 548</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: perceptual speech coding</h1> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">548</span> High Quality Speech Coding using Combined Parametric and Perceptual Modules</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=M.%20Kulesza">M. Kulesza</a>, <a href="https://publications.waset.org/search?q=G.%20Szwoch"> G. Szwoch</a>, <a href="https://publications.waset.org/search?q=A.%20Czy%C5%BCewski"> A. Czyżewski</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>A novel approach to speech coding using the hybrid architecture is presented. Advantages of parametric and perceptual coding methods are utilized together in order to create a speech coding algorithm assuring better signal quality than in traditional CELP parametric codec. Two approaches are discussed. One is based on selection of voiced signal components that are encoded using parametric algorithm, unvoiced components that are encoded perceptually and transients that remain unencoded. The second approach uses perceptual encoding of the residual signal in CELP codec. The algorithm applied for precise transient selection is described. Signal quality achieved using the proposed hybrid codec is compared to quality of some standard speech codecs.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=CELP%20residual%20coding" title="CELP residual coding">CELP residual coding</a>, <a href="https://publications.waset.org/search?q=hybrid%20codec%20architecture" title=" hybrid codec architecture"> hybrid codec architecture</a>, <a href="https://publications.waset.org/search?q=perceptual%20speech%20coding" title=" perceptual speech coding"> perceptual speech coding</a>, <a href="https://publications.waset.org/search?q=speech%20codecs%20comparison." 
title=" speech codecs comparison."> speech codecs comparison.</a> </p> <a href="https://publications.waset.org/959/high-quality-speech-coding-using-combined-parametric-and-perceptual-modules" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/959/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/959/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/959/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/959/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/959/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/959/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/959/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/959/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/959/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/959/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/959.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1530</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">547</span> From Maskee to Audible Noise in Perceptual Speech Enhancement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Asmaa%20Amehraye">Asmaa Amehraye</a>, <a href="https://publications.waset.org/search?q=Dominique%20Pastor"> Dominique Pastor</a>, <a href="https://publications.waset.org/search?q=Ahmed%20Tamtaoui"> Ahmed Tamtaoui</a>, <a href="https://publications.waset.org/search?q=Driss%20Aboutajdine"> Driss Aboutajdine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A new analysis of perceptual speech enhancement is presented. It focuses on the fact that if only noise above the masking threshold is filtered, then noise below the masking threshold, but above the absolute threshold of hearing, can become audible after the masker filtering. This particular drawback of some perceptual filters, hereafter called the maskee-to-audible-noise (MAN) phenomenon, favours the emergence of isolated tonals that increase musical noise. Two filtering techniques that avoid or correct the MAN phenomenon are proposed to effectively suppress background noise without introducing much distortion. Experimental results, including objective and subjective measurements, show that these techniques improve the enhanced speech quality and the gain they bring emphasizes the importance of the MAN phenomenon. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Perceptual%20speech%20filtering" title="Perceptual speech filtering">Perceptual speech filtering</a>, <a href="https://publications.waset.org/search?q=maskee%20to%20audible%20noise" title=" maskee to audible noise"> maskee to audible noise</a>, <a href="https://publications.waset.org/search?q=distorsion" title="distorsion">distorsion</a>, <a href="https://publications.waset.org/search?q=musical%20noise." title=" musical noise."> musical noise.</a> </p> <a href="https://publications.waset.org/10552/from-maskee-to-audible-noise-in-perceptual-speech-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10552/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10552/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10552/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10552/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10552/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10552/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10552/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10552/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10552/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10552/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10552.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1492</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">546</span> Effect of Visual Speech in Sign Speech Synthesis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Zdenek%20Krnoul">Zdenek Krnoul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This article investigates a contribution of synthesized visual speech. Synthesis of visual speech expressed by a computer consists in an animation in particular movements of lips. Visual speech is also necessary part of the non-manual component of a sign language. Appropriate methodology is proposed to determine the quality and the accuracy of synthesized visual speech. Proposed methodology is inspected on Czech speech. Hence, this article presents a procedure of recording of speech data in order to set a synthesis system as well as to evaluate synthesized speech. Furthermore, one option of the evaluation process is elaborated in the form of a perceptual test. This test procedure is verified on the measured data with two settings of the synthesis system. The results of the perceptual test are presented as a statistically significant increase of intelligibility evoked by real and synthesized visual speech. 
545. A High Quality Speech Coder at 600 bps
Authors: Yong Zhang, Ruimin Hu
Abstract: This paper presents a vocoder that obtains high quality synthetic speech at 600 bps. To reduce the bit rate, the algorithm is based on a sinusoidally excited linear prediction model that extracts few coding parameters; three consecutive frames are grouped into a superframe and jointly vector quantized to obtain high coding efficiency. The inter-frame redundancy is exploited with distinct quantization schemes for the different unvoiced/voiced frame combinations in the superframe. Experimental results show that the quality of the proposed coder is better than that of 2.4 kbps LPC10e, approximately the same as that of 2.4 kbps MELP, and highly robust.
Keywords: speech coding, vector quantization, linear prediction, mixed sinusoidal excitation
Downloads: 2188
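The superframe grouping is the main bit-rate lever: at 600 bps with, say, 25 ms frames, a three-frame superframe carries only 45 bits for all parameters, so per-frame parameters are stacked and quantized with a single joint index. A minimal sketch, with a hypothetical random codebook standing in for a trained one:

```python
import numpy as np

def joint_vq(superframe_params, codebook):
    """Quantize the stacked parameters of a 3-frame superframe jointly.

    superframe_params: (3, d) array of per-frame parameters (e.g. LSFs).
    codebook: (K, 3*d) array; one index encodes all three frames at once,
    which is how the inter-frame redundancy is exploited.
    """
    x = superframe_params.reshape(-1)                 # stack the 3 frames
    idx = np.argmin(np.sum((codebook - x) ** 2, axis=1))
    return idx, codebook[idx].reshape(superframe_params.shape)

rng = np.random.default_rng(0)
params = rng.normal(size=(3, 10))                     # 3 frames x 10 params
codebook = rng.normal(size=(2 ** 12, 30))             # hypothetical 12-bit codebook
idx, quantized = joint_vq(params, codebook)
```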
544. Speech Coding and Recognition
Authors: M. Satya Sai Ram, P. Siddaiah, M. Madhavi Latha
Abstract: This paper investigates the performance of a speech recognizer in an interactive voice response system for speech signals coded with a vector quantization technique, namely Multi Switched Split Vector Quantization. Recognition of the coded output can be used in voice banking applications. The coded speech signals are recognized using the Hidden Markov Model technique. The spectral distortion performance, computational complexity, and memory requirements of Multi Switched Split Vector Quantization, and the performance of the speech recognizer at various bit rates, have been computed. The results show that the speech recognizer performs best at 24 bits/frame, with the recognition rate varying from 100% to 93.33% across the bit rates tested.
Keywords: linear predictive coding, speech recognition, voice banking, Multi Switched Split Vector Quantization, Hidden Markov Model, linear predictive coefficients
Downloads: 1845
543. Voice Features as the Diagnostic Marker of Autism
Authors: Elena Lyakso, Olga Frolova, Yuri Matveev
Abstract: The aim of the study is to determine the acoustic features of the voice and speech of children with autism spectrum disorders (ASD) as a possible additional diagnostic criterion. The participants were 95 children with ASD aged 5-16 years, 150 typically developing (TD) children, and 103 adults who listened to the children's speech samples. Three types of speech analysis were performed: spectrographic analysis, perceptual evaluation by listeners, and automatic recognition. In the speech of children with ASD, the pitch values, the pitch range, and the frequency and intensity of the third (emotional) formant, which lead to an "atypical" spectrogram of vowels, are higher than the corresponding parameters in the speech of TD children. High values of the vowel articulation index (VAI) are specific to the speech signals of ASD children. These acoustic features can be considered a diagnostic marker of autism. The ability of both human listeners and automatic recognition to determine the psychoneurological state of children via their speech is assessed.
Keywords: autism spectrum disorders, biomarker of autism, child speech, voice features
Downloads: 619
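The vowel articulation index cited above is conventionally computed from the first two formants of the corner vowels /a/, /i/, /u/. A minimal sketch following the usual definition; the study's exact variant is not given here, and the formant values in the example are illustrative, not data from the study:

```python
def vowel_articulation_index(f1_a, f2_a, f1_i, f2_i, f1_u, f2_u):
    """Vowel articulation index (VAI) from corner-vowel formants in Hz.

    Usual definition: VAI = (F2/i/ + F1/a/) / (F1/i/ + F1/u/ + F2/u/ + F2/a/).
    Lower values indicate more centralized (reduced) vowel articulation;
    higher values indicate a more expanded vowel space.
    """
    return (f2_i + f1_a) / (f1_i + f1_u + f2_u + f2_a)

# Illustrative adult-like formant values (hypothetical):
print(round(vowel_articulation_index(f1_a=850, f2_a=1600,
                                     f1_i=300, f2_i=2300,
                                     f1_u=330, f2_u=900), 3))   # ~1.006
```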
542. On the Effectivity of Different Pseudo-Noise and Orthogonal Sequences for Speech Encryption from Correlation Properties
Authors: V. Anil Kumar, Abhijit Mitra, S. R. Mahadeva Prasanna
Abstract: We analyze the effectiveness of different pseudo-noise (PN) and orthogonal sequences for encrypting speech signals in terms of perceptual intelligibility. A speech signal can be viewed as a sequence of correlated samples, and each sample as a sequence of bits. The residual intelligibility of the speech signal can be reduced by removing the correlation among the speech samples. PN sequences have random-like properties that help reduce this correlation. The mean square aperiodic auto-correlation (MSAAC) and mean square aperiodic cross-correlation (MSACC) measures are used to test the randomness of the PN sequences. The results of the investigation show the effectiveness of large Kasami sequences for this purpose among the many PN sequences considered.
Keywords: speech encryption, pseudo-noise codes, maximal length, Gold, Barker, Kasami, Walsh-Hadamard, auto-correlation, cross-correlation, figure of merit
Downloads: 2041
title=" figure of merit."> figure of merit.</a> </p> <a href="https://publications.waset.org/6316/on-the-effectivity-of-different-pseudo-noise-and-orthogonal-sequences-for-speech-encryption-from-correlation-properties" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6316/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6316/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6316/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6316/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6316/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6316/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6316/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6316/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6316/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6316/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6316.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2041</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">541</span> Orchestra/Percussion Classification Algorithm for United Speech Audio Coding System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Yueming%20Wang">Yueming Wang</a>, <a href="https://publications.waset.org/search?q=Rendong%20Ying"> Rendong Ying</a>, <a href="https://publications.waset.org/search?q=Sumxin%20Jiang"> Sumxin Jiang</a>, <a href="https://publications.waset.org/search?q=Peilin%20Liu"> Peilin Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Unified Speech Audio Coding (USAC), the latest MPEG standardization for unified speech and audio coding, uses a speech/audio classification algorithm to distinguish speech and audio segments of the input signal. The quality of the recovered audio can be increased by well-designed orchestra/percussion classification and subsequent processing. However, owing to the shortcoming of the system, introducing an orchestra/percussion classification and modifying subsequent processing can enormously increase the quality of the recovered audio. This paper proposes an orchestra/percussion classification algorithm for the USAC system which only extracts 3 scales of Mel-Frequency Cepstral Coefficients (MFCCs) rather than traditional 13 scales of MFCCs and use Iterative Dichotomiser 3 (ID3) Decision Tree rather than other complex learning method, thus the proposed algorithm has lower computing complexity than most existing algorithms. 
540. Speech Data Compression using Vector Quantization
Authors: H. B. Kekre, Tanuja K. Sarode
Abstract: Transforms, which are lossy algorithms, are mostly used for speech data compression. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using vector quantization, with the VQ algorithms LBG, KPE, and FCG. The results table shows the computational complexity of these three algorithms. We also introduce a new performance parameter, the Average Fractional Change in Speech Sample (AFCSS). Our FCG algorithm gives far better performance considering mean absolute error, AFCSS, and complexity compared to the others.
Keywords: vector quantization, data compression, encoding, speech coding
Downloads: 2403
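Of the three codebook-training algorithms used, LBG is the classical one; KPE and FCG are the authors' variants and are not reproduced here. A minimal sketch of LBG's split-and-refine loop on vectors of consecutive speech samples:

```python
import numpy as np

def lbg(vectors, codebook_size, eps=1e-3, iters=20):
    """Linde-Buzo-Gray codebook training: split, then k-means refinement."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while len(codebook) < codebook_size:
        # Split every centroid into a slightly perturbed pair.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

# Speech samples grouped into 8-dimensional vectors, 64-entry codebook:
rng = np.random.default_rng(1)
speech = rng.normal(size=4000)                  # stand-in for real samples
vectors = speech[: len(speech) // 8 * 8].reshape(-1, 8)
codebook = lbg(vectors, 64)
indices = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
```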
539. Investigation of Combined use of MFCC and LPC Features in Speech Recognition Systems
Authors: K. R. Aida-Zade, C. Ardil, S. S. Rustamov
Abstract: The paper states the automatic speech recognition problem, the purpose of speech recognition, and its application fields. The principles of building a speech recognition system for Azerbaijani speech, and the problems arising in such a system, are investigated. The algorithms for computing speech features, the main part of a speech recognition system, are analyzed; in particular, algorithms for determining the Mel Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coding (LPC) coefficients that express the basic speech features are developed. The combined use of MFCC and LPC cepstra is suggested to improve the reliability of the speech recognition system. To this end, the recognition system is divided into MFCC-based and LPC-based recognition subsystems; training and recognition are carried out in both subsystems separately, and the system accepts a decision only when both subsystems return the same result, which decreases the error rate during recognition. Training and recognition are realized by artificial neural networks trained with the conjugate gradient method. The paper investigates the problems caused by the number of speech features when training the neural networks of the MFCC- and LPC-based subsystems, and analyzes the variability of results of neural networks trained from different initial points. A methodology of combining neural networks trained from different initial points is suggested to improve the reliability of the recognition system and increase recognition quality, and practical results are shown.
Keywords: speech recognition, cepstral analysis, voice activation detection algorithm, Mel Frequency Cepstral Coefficients, features of speech, cepstral mean subtraction, neural networks, linear predictive coding
Downloads: 913
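The LPC coefficients referred to above are conventionally obtained from the frame autocorrelation by the Levinson-Durbin recursion. A minimal sketch; the window and prediction order are illustrative, not taken from the paper:

```python
import numpy as np

def lpc(frame, order=10):
    """LPC coefficients via the autocorrelation method and Levinson-Durbin.

    Returns a = [1, a1, ..., a_order] minimizing the prediction error
    e[n] = sum_k a[k] * s[n-k], plus the residual energy.
    """
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.array([1.0])
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:], r[i - 1:0:-1])
        k = -acc / err                       # reflection coefficient
        a_pad = np.append(a, 0.0)
        a = a_pad + k * a_pad[::-1]          # order-update of the predictor
        err *= 1.0 - k * k
    return a, err
```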
title=" Linear Predictive Coding."> Linear Predictive Coding.</a> </p> <a href="https://publications.waset.org/10008323/investigation-of-combined-use-of-mfcc-and-lpc-features-in-speech-recognition-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10008323/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10008323/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10008323/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10008323/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10008323/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10008323/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10008323/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10008323/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10008323/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10008323/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10008323.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">913</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">538</span> Perceptual JPEG Compliant Coding by Using DCT-Based Visibility Thresholds of Color Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Kuo-Cheng%20Liu"> Kuo-Cheng Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Effective estimation of just noticeable distortion (JND) for images is helpful to increase the efficiency of a compression algorithm in which both the statistical redundancy and the perceptual redundancy should be accurately removed. In this paper, we design a DCT-based model for estimating JND profiles of color images. Based on a mathematical model of measuring the base detection threshold for each DCT coefficient in the color component of color images, the luminance masking adjustment, the contrast masking adjustment, and the cross masking adjustment are utilized for luminance component, and the variance-based masking adjustment based on the coefficient variation in the block is proposed for chrominance components. In order to verify the proposed model, the JND estimator is incorporated into the conventional JPEG coder to improve the compression performance. A subjective and fair viewing test is designed to evaluate the visual quality of the coding image under the specified viewing condition. 
537. Multi Switched Split Vector Quantization of Narrowband Speech Signals
Authors: M. Satya Sai Ram, P. Siddaiah, M. Madhavi Latha
Abstract: Vector quantization is a powerful tool for speech coding applications. This paper deals with LPC coding of speech signals using a new technique called Multi Switched Split Vector Quantization (MSSVQ), a hybrid of multi-stage, switched, and split vector quantization techniques. The spectral distortion performance, computational complexity, and memory requirements of MSSVQ are compared to those of split vector quantization (SVQ), multi-stage vector quantization (MSVQ), and switched split vector quantization (SSVQ). The results show that MSSVQ has better spectral distortion performance, lower computational complexity, and lower memory requirements than all of the above-mentioned product code vector quantization techniques. Computational complexity is measured in floating point operations (flops), and memory requirements are measured in floats.
Keywords: linear predictive coding, multi-stage vector quantization, switched split vector quantization, split vector quantization, line spectral frequencies (LSF)
Downloads: 1672
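The "split" in this family of quantizers is what keeps full search tractable: quantizing a 10-dimensional LSF vector directly at 24 bits would need a 2^24-entry codebook, while splitting it into two 12-bit halves shrinks both storage (floats) and search cost (flops). A back-of-the-envelope sketch under those illustrative dimensions:

```python
def vq_costs(dim, bits):
    """Full-search VQ cost for one codebook: entries, floats, rough flops.

    flops counts ~3 ops (subtract, square, accumulate) per component
    per codebook entry for one input vector.
    """
    entries = 2 ** bits
    return {"entries": entries,
            "memory_floats": entries * dim,
            "flops_per_vector": 3 * dim * entries}

full = vq_costs(dim=10, bits=24)                   # unsplit: infeasible
split = [vq_costs(dim=5, bits=12), vq_costs(dim=5, bits=12)]
print(full["memory_floats"])                       # 167,772,160 floats
print(sum(p["memory_floats"] for p in split))      # 40,960 floats
```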
536. A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder
Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf
Abstract: In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies a perceptual weighting to the wavelet transform coefficients before SPIHT encoding, in order to reach a targeted bit rate with a perceptual quality improvement over the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEZIC quality assessment. The coder is based on a vision model that incorporates various masking effects of HVS perception; it weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on (1) luminance masking and contrast masking, (2) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting, and (3) the wavelet error sensitivity (WES), used to reduce the perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique, and the experimental results show that our coder performs very well in terms of quality measurement.
Keywords: DWT, linear-phase 9/7 filter, 9/7 wavelet error sensitivity (WES), CSF implementation approaches, just noticeable difference (JND), luminance masking, contrast masking, standard SPIHT, objective quality measure, probability score (PS)
Downloads: 2051
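The core mechanism, scaling each wavelet subband by a perceptual weight before embedded coding, can be sketched with PyWavelets (bior4.4 is the usual stand-in for the linear-phase 9/7 filter). The weights below are placeholders for the CSF/WES-derived values the paper computes:

```python
import numpy as np
import pywt

def weight_subbands(image, weights, wavelet="bior4.4", level=3):
    """Scale each wavelet subband by a perceptual weight before encoding.

    weights: dict keyed by "approx" and (lvl, "H"|"V"|"D"); the values
    here are placeholders for CSF/WES-derived weights.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [coeffs[0] * weights["approx"]]
    for lvl, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
        out.append((ch * weights[(lvl, "H")],
                    cv * weights[(lvl, "V")],
                    cd * weights[(lvl, "D")]))
    return out

img = np.random.default_rng(0).uniform(0, 255, size=(64, 64))
weights = {"approx": 1.0}
weights.update({(l, o): 1.0 / l for l in (1, 2, 3) for o in "HVD"})
weighted = weight_subbands(img, weights)   # fed to SPIHT in place of raw coeffs
```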
title=" Probability Score PS."> Probability Score PS.</a> </p> <a href="https://publications.waset.org/767/a-perceptually-optimized-wavelet-embedded-zero-tree-image-coder" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/767/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/767/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/767/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/767/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/767/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/767/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/767/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/767/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/767/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/767/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/767.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2051</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">535</span> Coding of DWT Coefficients using Run-length Coding and Huffman Coding for the Purpose of Color Image Compression</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Varun%20Setia">Varun Setia</a>, <a href="https://publications.waset.org/search?q=Vinod%20Kumar"> Vinod Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In present paper we proposed a simple and effective method to compress an image. Here we found success in size reduction of an image without much compromising with it-s quality. Here we used Haar Wavelet Transform to transform our original image and after quantization and thresholding of DWT coefficients Run length coding and Huffman coding schemes have been used to encode the image. DWT is base for quite populate JPEG 2000 technique.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Lossy%20compression" title="Lossy compression">Lossy compression</a>, <a href="https://publications.waset.org/search?q=DWT" title=" DWT"> DWT</a>, <a href="https://publications.waset.org/search?q=quantization" title=" quantization"> quantization</a>, <a href="https://publications.waset.org/search?q=Run%20length%20coding" title=" Run length coding"> Run length coding</a>, <a href="https://publications.waset.org/search?q=Huffman%20coding" title=" Huffman coding"> Huffman coding</a>, <a href="https://publications.waset.org/search?q=JPEG2000." 
title=" JPEG2000."> JPEG2000.</a> </p> <a href="https://publications.waset.org/12609/coding-of-dwt-coefficients-using-run-length-coding-and-huffman-coding-for-the-purpose-of-color-image-compression" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12609/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12609/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12609/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12609/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12609/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12609/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12609/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12609/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12609/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12609/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12609.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2922</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">534</span> Subjective Evaluation of Spectral and Time Domain Cascading Algorithm for Speech Enhancement for Mobile Communication </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Harish%20Chander">Harish Chander</a>, <a href="https://publications.waset.org/search?q=Balwinder%20Singh"> Balwinder Singh</a>, <a href="https://publications.waset.org/search?q=Ravinder%20Khanna"> Ravinder Khanna</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we present the comparative subjective analysis of Improved Minima Controlled Recursive Averaging (IMCRA) Algorithm, the Kalman filter and the cascading of IMCRA and Kalman filter algorithms. Performance of speech enhancement algorithms can be predicted in two different ways. One is the objective method of evaluation in which the speech quality parameters are predicted computationally. The second is a subjective listening test in which the processed speech signal is subjected to the listeners who judge the quality of speech on certain parameters. The comparative objective evaluation of these algorithms was analyzed in terms of Global SNR, Segmental SNR and Perceptual Evaluation of Speech Quality (PESQ) by the authors and it was reported that with cascaded algorithms there is a substantial increase in objective parameters. Since subjective evaluation is the real test to judge the quality of speech enhancement algorithms, the authenticity of superiority of cascaded algorithms over individual IMCRA and Kalman algorithms is tested through subjective analysis in this paper. 
The results of subjective listening tests have confirmed that the cascaded algorithms perform better under all types of noise conditions.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20enhancement" title="Speech enhancement">Speech enhancement</a>, <a href="https://publications.waset.org/search?q=spectral%20domain" title=" spectral domain"> spectral domain</a>, <a href="https://publications.waset.org/search?q=time%20domain" title=" time domain"> time domain</a>, <a href="https://publications.waset.org/search?q=PESQ" title=" PESQ"> PESQ</a>, <a href="https://publications.waset.org/search?q=subjective%20analysis" title=" subjective analysis"> subjective analysis</a>, <a href="https://publications.waset.org/search?q=objective%20analysis." title=" objective analysis. "> objective analysis. </a> </p> <a href="https://publications.waset.org/10008232/subjective-evaluation-of-spectral-and-time-domain-cascading-algorithm-for-speech-enhancement-for-mobile-communication" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10008232/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10008232/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10008232/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10008232/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10008232/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10008232/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10008232/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10008232/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10008232/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10008232/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10008232.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1231</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">533</span> Near-Lossless Image Coding based on Orthogonal Polynomials</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Krishnamoorthy%20R">Krishnamoorthy R</a>, <a href="https://publications.waset.org/search?q=Rajavijayalakshmi%20K"> Rajavijayalakshmi K</a>, <a href="https://publications.waset.org/search?q=Punidha%20R"> Punidha R</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a near-lossless image coding scheme based on Orthogonal Polynomials Transform (OPT) has been presented. The polynomial operators and polynomial basis operators are obtained from a set of orthogonal polynomial functions for the proposed transform coding.
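Before the blockwise procedure described next, it may help to see how a generic orthonormal polynomial basis and a separable block transform can be built; this sketch (QR factorisation of a Vandermonde matrix) is a stand-in for illustration only, since the paper's specific operators are not reproduced here: <pre><code>import numpy as np

def orthonormal_poly_basis(n):
    """An orthonormal polynomial basis on n sample points, via QR of a
    Vandermonde matrix; column k spans polynomials of degree k or less.
    (A generic construction, not the operators defined in the paper.)"""
    x = np.linspace(-1.0, 1.0, n)
    V = np.vander(x, n, increasing=True)   # columns: 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)
    return Q                               # Q.T @ Q is the identity

def transform_block(block, Q):
    """Separable 2-D transform of one square block (rows, then columns)."""
    return Q.T @ block @ Q

def inverse_block(coeff, Q):
    return Q @ coeff @ Q.T

Q = orthonormal_poly_basis(8)
block = np.random.rand(8, 8)
coeff = transform_block(block, Q)
assert np.allclose(inverse_block(coeff, Q), block)   # perfect reconstruction</code></pre>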
The image is partitioned into a number of distinct square blocks and the proposed transform coding is applied to each of these individually. After applying the proposed transform coding, the transformed coefficients are rearranged into a sub-band structure. The Embedded Zerotree (EZ) coding algorithm is then employed to quantize the coefficients. The proposed transform is implemented for various block sizes and the performance is compared with existing Discrete Cosine Transform (DCT) transform coding scheme. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Near-lossless%20Coding" title="Near-lossless Coding">Near-lossless Coding</a>, <a href="https://publications.waset.org/search?q=Orthogonal%20Polynomials%0ATransform" title=" Orthogonal Polynomials Transform"> Orthogonal Polynomials Transform</a>, <a href="https://publications.waset.org/search?q=Embedded%20Zerotree%20Coding" title=" Embedded Zerotree Coding"> Embedded Zerotree Coding</a> </p> <a href="https://publications.waset.org/13669/near-lossless-image-coding-based-on-orthogonal-polynomials" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13669/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13669/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13669/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13669/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13669/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13669/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13669/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13669/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13669/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13669/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13669.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1944</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">532</span> A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=A.%20Bajit">A. Bajit</a>, <a href="https://publications.waset.org/search?q=M.%20Nahid"> M. Nahid</a>, <a href="https://publications.waset.org/search?q=A.%20Tamtaoui"> A. Tamtaoui</a>, <a href="https://publications.waset.org/search?q=E.%20H.%20Bouyakhf"> E. H. 
Bouyakhf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we propose a Perceptually Optimized Foveation based Embedded ZeroTree Image Coder (POEFIC) that applies a perceptual weighting to the wavelet coefficients before the SPIHT encoding stage, in order to reach a targeted bit rate with a perceptual quality improvement around a given fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the HVS, which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of human visual system (HVS) perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce high frequencies in peripheral regions, 2) luminance and contrast masking, and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder achieves very good performance in terms of quality measurement.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=DWT" title="DWT">DWT</a>, <a href="https://publications.waset.org/search?q=linear-phase%209%2F7%20filter" title=" linear-phase 9/7 filter"> linear-phase 9/7 filter</a>, <a href="https://publications.waset.org/search?q=Foveation%20Filtering" title=" Foveation Filtering"> Foveation Filtering</a>, <a href="https://publications.waset.org/search?q=CSF%20implementation%20approaches" title=" CSF implementation approaches"> CSF implementation approaches</a>, <a href="https://publications.waset.org/search?q=9%2F7%20Wavelet%20JND%20Thresholds%20and%20Wavelet%20Error%20Sensitivity%20WES" title=" 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES"> 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES</a>, <a href="https://publications.waset.org/search?q=Luminance%20and%20Contrast%20masking" title=" Luminance and Contrast masking"> Luminance and Contrast masking</a>, <a href="https://publications.waset.org/search?q=standard%20SPIHT" title=" standard SPIHT"> standard SPIHT</a>, <a href="https://publications.waset.org/search?q=Objective%20Quality%20Measure" title=" Objective Quality Measure"> Objective Quality Measure</a>, <a href="https://publications.waset.org/search?q=Probability%20Score%20PS."
title=" Probability Score PS."> Probability Score PS.</a> </p> <a href="https://publications.waset.org/4900/a-perceptually-optimized-foveation-based-wavelet-embedded-zero-tree-image-coding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4900/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4900/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4900/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4900/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4900/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4900/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4900/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4900/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4900/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4900/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4900.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1795</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">531</span> A Perceptual Image Coding method of High Compression Rate</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Fahmi%20Kammoun">Fahmi Kammoun</a>, <a href="https://publications.waset.org/search?q=Mohamed%20Salim%20Bouhlel"> Mohamed Salim Bouhlel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the framework of the image compression by Wavelet Transforms, we propose a perceptual method by incorporating Human Visual System (HVS) characteristics in the quantization stage. Indeed, human eyes haven-t an equal sensitivity across the frequency bandwidth. Therefore, the clarity of the reconstructed images can be improved by weighting the quantization according to the Contrast Sensitivity Function (CSF). The visual artifact at low bit rate is minimized. To evaluate our method, we use the Peak Signal to Noise Ratio (PSNR) and a new evaluating criteria witch takes into account visual criteria. The experimental results illustrate that our technique shows improvement on image quality at the same compression ratio. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Contrast%20Sensitivity%20Function" title="Contrast Sensitivity Function">Contrast Sensitivity Function</a>, <a href="https://publications.waset.org/search?q=Human%20Visual%0ASystem" title=" Human Visual System"> Human Visual System</a>, <a href="https://publications.waset.org/search?q=Image%20compression" title=" Image compression"> Image compression</a>, <a href="https://publications.waset.org/search?q=Wavelet%20transforms." 
title=" Wavelet transforms."> Wavelet transforms.</a> </p> <a href="https://publications.waset.org/14648/a-perceptual-image-coding-method-of-high-compression-rate" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/14648/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/14648/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/14648/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/14648/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/14648/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/14648/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/14648/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/14648/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/14648/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/14648/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/14648.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1874</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">530</span> Computationally Efficient Signal Quality Improvement Method for VoIP System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=H.%20P.%20Singh">H. P. Singh</a>, <a href="https://publications.waset.org/search?q=S.%20Singh"> S. Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The voice signal in Voice over Internet protocol (VoIP) system is processed through the best effort policy based IP network, which leads to the network degradations including delay, packet loss jitter. The work in this paper presents the implementation of finite impulse response (FIR) filter for voice quality improvement in the VoIP system through distributed arithmetic (DA) algorithm. The VoIP simulations are conducted with AMR-NB 6.70 kbps and G.729a speech coders at different packet loss rates and the performance of the enhanced VoIP signal is evaluated using the perceptual evaluation of speech quality (PESQ) measurement for narrowband signal. 
The results show reduction in the computational complexity in the system and significant improvement in the quality of the VoIP voice signal.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=VoIP" title="VoIP">VoIP</a>, <a href="https://publications.waset.org/search?q=Signal%20Quality" title=" Signal Quality"> Signal Quality</a>, <a href="https://publications.waset.org/search?q=Distributed%20Arithmetic" title=" Distributed Arithmetic"> Distributed Arithmetic</a>, <a href="https://publications.waset.org/search?q=Packet%20Loss" title=" Packet Loss"> Packet Loss</a>, <a href="https://publications.waset.org/search?q=Speech%20Coder." title=" Speech Coder."> Speech Coder.</a> </p> <a href="https://publications.waset.org/6730/computationally-efficient-signal-quality-improvement-method-for-voip-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6730/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6730/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6730/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6730/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6730/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6730/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6730/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6730/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6730/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6730/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6730.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1830</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">529</span> Peak-to-Average Power Ratio Reduction in OFDM Systems using Huffman Coding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Ashraf%20A.%20Eltholth">Ashraf A. Eltholth</a>, <a href="https://publications.waset.org/search?q=Adel%20R.%20Mikhail"> Adel R. Mikhail</a>, <a href="https://publications.waset.org/search?q=A.%20Elshirbini"> A. Elshirbini</a>, <a href="https://publications.waset.org/search?q=Moawad%20I.%20Moawad"> Moawad I. Moawad</a>, <a href="https://publications.waset.org/search?q=A.%20I.%20Abdelfattah"> A. I. Abdelfattah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we proposed the use of Huffman coding to reduce the PAR of an OFDM system as a distortionless scrambling technique, and we utilize the amount saved in the total bit rate by the Huffman coding to send the encoding table for accurate decoding at the receiver without reducing the effective throughput. 
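For reference, the PAR (peak-to-average power ratio) of an OFDM symbol is commonly measured as below; the QPSK mapping, 64 subcarriers and 4x IFFT oversampling are illustrative assumptions, not parameters taken from the paper: <pre><code>import numpy as np

def papr_db(subcarriers, oversample=4):
    """Peak-to-average power ratio of one OFDM symbol, in dB. Zero-padding
    the IFFT input (oversampling) better approximates the analogue peak."""
    N = len(subcarriers)
    padded = np.concatenate([subcarriers,
                             np.zeros(N * (oversample - 1), dtype=complex)])
    x = np.fft.ifft(padded)
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

# QPSK-modulated subcarriers for one 64-carrier symbol (illustrative)
N = 64
bits = np.random.randint(0, 2, 2 * N)
symbol = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])
print(round(papr_db(symbol), 2), "dB")</code></pre>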
We found that the use of Huffman coding reduces the PAR by about 6 dB. We also investigated the effect of the PAR reduction due to Huffman coding by testing the spectral spreading and the in-band distortion caused by the HPA at different IBO values. The simulation results fully matched our expectations for the proposed solution. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=HPA" title="HPA">HPA</a>, <a href="https://publications.waset.org/search?q=Huffman%20coding" title=" Huffman coding"> Huffman coding</a>, <a href="https://publications.waset.org/search?q=OFDM" title=" OFDM"> OFDM</a>, <a href="https://publications.waset.org/search?q=PAR" title=" PAR"> PAR</a> </p> <a href="https://publications.waset.org/1174/peak-to-average-power-ratio-reduction-in-ofdm-systems-using-huffman-coding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/1174/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/1174/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/1174/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/1174/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/1174/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/1174/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/1174/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/1174/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/1174/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/1174/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/1174.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2597</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">528</span> Watermark Bit Rate in Diverse Signal Domains</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nedeljko%20Cvejic">Nedeljko Cvejic</a>, <a href="https://publications.waset.org/search?q=Tapio%20Sepp"> Tapio Sepp</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>A study of the obtainable watermark data rate for information hiding algorithms is presented in this paper. As the perceptual entropy for wideband monophonic audio signals is in the range of four to five bits per sample, a significant amount of additional information can be inserted into the signal without causing any perceptual distortion. Experimental results showed that transform domain watermark embedding considerably outperforms watermark embedding in the time domain, and that signal decompositions with a high gain of transform coding, like the wavelet transform, are the most suitable for high data rate information hiding.
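As one concrete, generic example of transform-domain embedding (not the scheme evaluated in the paper), bits can be hidden in mid-band DCT coefficients by quantisation index modulation; the step size delta and the coefficient range used below are illustrative: <pre><code>import numpy as np
from scipy.fft import dct, idct

def embed_qim(host, bits, delta=0.05, start=100):
    """Quantisation-index-modulation embedding into DCT coefficients:
    bit 0 snaps a coefficient to the delta-lattice, bit 1 to the
    half-shifted lattice."""
    c = dct(host, norm='ortho')
    for i, b in enumerate(bits):
        k = start + i
        c[k] = np.round(c[k] / delta) * delta + (delta / 2.0 if b else 0.0)
    return idct(c, norm='ortho')

def extract_qim(signal, nbits, delta=0.05, start=100):
    c = dct(signal, norm='ortho')
    out = []
    for i in range(nbits):
        r = np.mod(c[start + i], delta)
        d0 = min(r, delta - r)           # distance to the bit-0 lattice
        d1 = abs(r - delta / 2.0)        # distance to the bit-1 lattice
        out.append(1 if d0 > d1 else 0)
    return out

host = np.random.randn(4096)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_qim(host, bits)
assert extract_qim(marked, len(bits)) == bits</code></pre>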
</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Digital%20watermarking" title="Digital watermarking">Digital watermarking</a>, <a href="https://publications.waset.org/search?q=information%20hiding" title=" information hiding"> information hiding</a>, <a href="https://publications.waset.org/search?q=audio%20watermarking" title=" audio watermarking"> audio watermarking</a>, <a href="https://publications.waset.org/search?q=watermark%20data%20rate." title=" watermark data rate."> watermark data rate.</a> </p> <a href="https://publications.waset.org/11687/watermark-bit-rate-in-diverse-signal-domains" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/11687/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/11687/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/11687/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/11687/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/11687/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/11687/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/11687/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/11687/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/11687/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/11687/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/11687.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1628</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">527</span> Eukaryotic Gene Prediction by an Investigation of Nonlinear Dynamical Modeling Techniques on EIIP Coded Sequences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mai%20S.%20Mabrouk">Mai S. Mabrouk</a>, <a href="https://publications.waset.org/search?q=Nahed%20H.%20Solouma"> Nahed H. Solouma</a>, <a href="https://publications.waset.org/search?q=Abou-Bakr%20M.%20Youssef"> Abou-Bakr M. Youssef</a>, <a href="https://publications.waset.org/search?q=Yasser%20M.%20Kadah"> Yasser M. Kadah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Many digital signal processing techniques have been used to automatically distinguish protein coding regions (exons) from non-coding regions (introns) in DNA sequences. In this work, we have characterized these sequences according to their nonlinear dynamical features such as moment invariants, correlation dimension, and largest Lyapunov exponent estimates. We have applied our model to a number of real sequences encoded into a time series using EIIP sequence indicators.
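The EIIP encoding step is straightforward to reproduce; the four indicator values below are the standard published EIIP values per nucleotide, while the delay-embedding sketch that follows is a generic phase-space reconstruction with illustrative dimension and lag, not the authors' exact settings: <pre><code>import numpy as np

# Standard EIIP (electron-ion interaction pseudopotential) nucleotide values
EIIP = {'A': 0.1260, 'G': 0.0806, 'T': 0.1335, 'C': 0.1340}

def eiip_series(seq):
    """Encode a DNA string as a numerical time series of EIIP indicators."""
    return np.array([EIIP[n] for n in seq.upper()])

def delay_embed(x, dim=3, tau=1):
    """Time-delay (phase-space) reconstruction: each row is a delay vector
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

traj = delay_embed(eiip_series("ATGGCGTACGCTAGCTAGGATCC"), dim=3, tau=2)
print(traj.shape)   # (sequence length - (dim - 1) * tau, dim)</code></pre>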
In order to discriminate between coding and non-coding DNA regions, the phase space trajectory was first reconstructed for coding and non-coding regions. Nonlinear dynamical features are extracted from those regions and used to investigate the differences between them. Our results indicate that the nonlinear dynamical characteristics have yielded significant differences between coding (CR) and non-coding regions (NCR) in DNA sequences. Finally, the classifier is tested on real genes where coding and non-coding regions are well known.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Gene%20prediction" title="Gene prediction">Gene prediction</a>, <a href="https://publications.waset.org/search?q=nonlinear%20dynamics" title=" nonlinear dynamics"> nonlinear dynamics</a>, <a href="https://publications.waset.org/search?q=correlation%20dimension" title=" correlation dimension"> correlation dimension</a>, <a href="https://publications.waset.org/search?q=Lyapunov%20exponent." title=" Lyapunov exponent."> Lyapunov exponent.</a> </p> <a href="https://publications.waset.org/9460/eukaryotic-gene-prediction-by-an-investigation-of-nonlinear-dynamical-modeling-techniques-on-eiip-coded-sequences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9460/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9460/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9460/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9460/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9460/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9460/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9460/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9460/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9460/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9460/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9460.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1825</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">526</span> Automatic Recognition of Emotionally Coloured Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Theologos%20Athanaselis">Theologos Athanaselis</a>, <a href="https://publications.waset.org/search?q=Stelios%20Bakamidis"> Stelios Bakamidis</a>, <a href="https://publications.waset.org/search?q=Ioannis%20Dologlou"> Ioannis Dologlou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Emotion in speech is an issue that has been attracting the interest of the speech community for many years, both in the context of speech synthesis and in
automatic speech recognition (ASR). In spite of the remarkable recent progress in Large Vocabulary Recognition (LVR), it is still far behind the ultimate goal of recognising free conversational speech uttered by any speaker in any environment. Current experimental tests show that the error rate of state-of-the-art large vocabulary recognition systems increases substantially when they are applied to spontaneous/emotional speech. This paper shows that the recognition rate for emotionally coloured speech can be improved by using a language model based on increased representation of emotional utterances. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Statistical%20language%20model" title="Statistical language model">Statistical language model</a>, <a href="https://publications.waset.org/search?q=N-grams" title=" N-grams"> N-grams</a>, <a href="https://publications.waset.org/search?q=emotionallycoloured%20speech" title=" emotionallycoloured speech"> emotionallycoloured speech</a> </p> <a href="https://publications.waset.org/1891/automatic-recognition-of-emotionally-coloured-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/1891/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/1891/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/1891/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/1891/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/1891/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/1891/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/1891/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/1891/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/1891/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/1891/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/1891.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1618</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">525</span> MIM: A Species Independent Approach for Classifying Coding and Non-Coding DNA Sequences in Bacterial and Archaeal Genomes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Achraf%20El%20Allali">Achraf El Allali</a>, <a href="https://publications.waset.org/search?q=John%20R.%20Rose"> John R. Rose</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A number of competing methodologies have been developed to identify genes and classify DNA sequences into coding and non-coding sequences. This classification process is fundamental in gene finding and gene annotation tools and is one of the most challenging tasks in bioinformatics and computational biology.
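The statistic at the heart of the MIM approach described below is the mutual information between nucleotides at a fixed separation; a toy estimator of that statistic (an illustration only, not the published training or classification pipeline): <pre><code>import numpy as np
from collections import Counter

def mutual_information(seq, k):
    """Empirical mutual information (bits) between nucleotides k positions
    apart, estimated from pair frequencies within one sequence."""
    pairs = [(seq[i], seq[i + k]) for i in range(len(seq) - k)]
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    return sum((c / n) * np.log2((c / n) / ((left[a] / n) * (right[b] / n)))
               for (a, b), c in joint.items())

# toy comparison: a repetitive 'coding-like' string vs. a shuffled copy
rng = np.random.default_rng(1)
coding_like = "ATGGCC" * 80
shuffled = "".join(rng.permutation(list(coding_like)))
for k in (1, 2, 3):
    print(k, round(mutual_information(coding_like, k), 3),
          round(mutual_information(shuffled, k), 3))</code></pre>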
An information theory measure based on mutual information has shown good accuracy in classifying DNA sequences into coding and noncoding. In this paper we describe a species independent iterative approach that distinguishes coding from non-coding sequences using the mutual information measure (MIM). A set of sixty prokaryotes is used to extract universal training data. To facilitate comparisons with the published results of other researchers, a test set of 51 bacterial and archaeal genomes was used to evaluate MIM. These results demonstrate that MIM produces superior results while remaining species independent. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Coding%20Non-coding%20Classification" title="Coding Non-coding Classification">Coding Non-coding Classification</a>, <a href="https://publications.waset.org/search?q=Entropy" title=" Entropy"> Entropy</a>, <a href="https://publications.waset.org/search?q=GeneRecognition" title=" GeneRecognition"> GeneRecognition</a>, <a href="https://publications.waset.org/search?q=Mutual%20Information." title=" Mutual Information."> Mutual Information.</a> </p> <a href="https://publications.waset.org/9008/mim-a-species-independent-approach-for-classifying-coding-and-non-coding-dna-sequences-in-bacterial-and-archaeal-genomes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9008/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9008/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9008/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9008/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9008/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9008/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9008/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9008/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9008/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9008/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9008.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1727</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">524</span> Speaker Identification using Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=R.V%20Pawar">R.V Pawar</a>, <a href="https://publications.waset.org/search?q=P.P.Kajave"> P.P.Kajave</a>, <a href="https://publications.waset.org/search?q=S.N.Mali"> S.N.Mali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The speech signal conveys information about the identity of the speaker. 
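A minimal sketch of two of the features this abstract goes on to describe, LPC coefficients and the average magnitude difference function (AMDF); the function names and the toy AR(2) example are illustrative, not the authors' implementation: <pre><code>import numpy as np

def amdf(frame, max_lag):
    """Average magnitude difference function; its minima mark candidate
    pitch lags."""
    n = len(frame)
    return np.array([np.mean(np.abs(frame[lag:] - frame[:n - lag]))
                     for lag in range(1, max_lag + 1)])

def lpc(frame, order):
    """LPC coefficients via the autocorrelation method (Levinson-Durbin)."""
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                       # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err = err * (1.0 - k * k)
    return a

# toy usage: recover a known AR(2) model from its own output
rng = np.random.default_rng(0)
x = np.zeros(8000)
for n in range(2, x.size):
    x[n] = 1.3 * x[n - 1] - 0.6 * x[n - 2] + 0.1 * rng.standard_normal()
print(lpc(x[1000:], 2))   # approximately [1.0, -1.3, 0.6]</code></pre>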
The area of speaker identification is concerned with extracting the identity of the person speaking an utterance. As speech interaction with computers becomes more pervasive in activities such as telephone use, financial transactions and information retrieval from speech databases, there is growing value in automatically identifying a speaker based solely on vocal characteristics. This paper focuses on text-dependent speaker identification, which deals with detecting a particular speaker from a known population. The system prompts the user to provide a speech utterance. It identifies the user by comparing the codebook of the utterance with those stored in the database and lists the speakers most likely to have produced that utterance. The speech signal is recorded for N speakers, and features are then extracted. Feature extraction is done by means of LPC coefficients, the AMDF, and the DFT. The neural network is trained by applying these features as input parameters, and the features are stored in templates for further comparison. The features of the speaker to be identified are extracted and compared with the stored templates using the backpropagation algorithm. Here, the trained network corresponds to the output, and the input is the extracted features of the speaker to be identified. The network performs the weight adjustment and the best match is found to identify the speaker. The number of epochs required to reach the target determines the network performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Average%20Mean%20Distance%20function" title="Average Mean Distance function">Average Mean Distance function</a>, <a href="https://publications.waset.org/search?q=Backpropogation" title="Backpropogation">Backpropogation</a>, <a href="https://publications.waset.org/search?q=Linear%20Predictive%20Coding" title=" Linear Predictive Coding"> Linear Predictive Coding</a>, <a href="https://publications.waset.org/search?q=MultilayeredPerceptron" title=" MultilayeredPerceptron"> MultilayeredPerceptron</a> </p> <a href="https://publications.waset.org/4977/speaker-identification-using-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4977/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4977/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4977/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4977/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4977/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4977/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4977/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4977/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4977/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4977/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO
690</a> <a href="https://publications.waset.org/4977.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1893</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">523</span> New Perceptual Organization within Temporal Displacement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Michele%20Sinico">Michele Sinico</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The psychological present has an actual extension. When a sequence of instantaneous stimuli falls within this short interval of time, observers perceive a compresence of events in succession, and the temporal order depends on the qualitative relationships between the perceptual properties of the events. Two experiments were carried out to study the influence of perceptual grouping, with and without temporal displacement, on the duration of auditory sequences. The psychophysical method of adjustment was adopted. The first experiment investigated the effect of the temporal displacement of a white noise on sequence duration. The second experiment investigated the effect of temporal displacement, along the pitch dimension, on the temporal shortening of the sequence. The results suggest that the temporal order of sounds, in the case of temporal displacement, is organized along the pitch dimension. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Time%20perception" title="Time perception">Time perception</a>, <a href="https://publications.waset.org/search?q=perceptual%20present" title=" perceptual present"> perceptual present</a>, <a href="https://publications.waset.org/search?q=temporal%0D%0Adisplacement" title=" temporal displacement"> temporal displacement</a>, <a href="https://publications.waset.org/search?q=gestalt%20laws%20of%20perceptual%20organization" title=" gestalt laws of perceptual organization"> gestalt laws of perceptual organization</a> </p> <a href="https://publications.waset.org/10008842/newperceptual-organization-within-temporal-displacement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10008842/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10008842/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10008842/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10008842/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10008842/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10008842/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10008842/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10008842/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10008842/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10008842/iso690" target="_blank"
rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10008842.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">808</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">522</span> Parallel Joint Channel Coding and Cryptography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nata%C5%A1a%20%C5%BDivi%C4%87">Nataša Živić</a>, <a href="https://publications.waset.org/search?q=Christoph%20Ruland"> Christoph Ruland</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Method of Parallel Joint Channel Coding and Cryptography has been analyzed and simulated in this paper. The method is an extension of Soft Input Decryption with feedback, which is used for improvement of channel decoding of secured messages. Parallel Joint Channel Coding and Cryptography results in improved coding gain of channel decoding, which achieves more than 2 dB. Such results are an implication of a combination of receiver components and their interoperability. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Block%20length" title="Block length">Block length</a>, <a href="https://publications.waset.org/search?q=Coding%20gain" title=" Coding gain"> Coding gain</a>, <a href="https://publications.waset.org/search?q=Feedback" title=" Feedback"> Feedback</a>, <a href="https://publications.waset.org/search?q=L-values" title=" L-values"> L-values</a>, <a href="https://publications.waset.org/search?q=Parallel%20Joint%20Channel%20Coding%20and%20Cryptography" title=" Parallel Joint Channel Coding and Cryptography"> Parallel Joint Channel Coding and Cryptography</a>, <a href="https://publications.waset.org/search?q=Soft%20Input%0ADecryption." 
title=" Soft Input Decryption."> Soft Input Decryption.</a> </p> <a href="https://publications.waset.org/8203/parallel-joint-channel-coding-and-cryptography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8203/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8203/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8203/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8203/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8203/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8203/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8203/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8203/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8203/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8203/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8203.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1585</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">521</span> Enhanced Frame-based Video Coding to Support Content-based Functionalities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Prabhudev%20Hosur">Prabhudev Hosur</a>, <a href="https://publications.waset.org/search?q=Rolando%20Carrasco"> Rolando Carrasco</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper presents the enhanced frame-based video coding scheme. The input source video to the enhanced frame-based video encoder consists of a rectangular-size video and shapes of arbitrarily-shaped objects on video frames. The rectangular frame texture is encoded by the conventional frame-based coding technique and the video object-s shape is encoded using the contour-based vertex coding. It is possible to achieve several useful content-based functionalities by utilizing the shape information in the bitstream at the cost of a very small overhead to the bitrate.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Video%20coding" title="Video coding">Video coding</a>, <a href="https://publications.waset.org/search?q=content-based" title=" content-based"> content-based</a>, <a href="https://publications.waset.org/search?q=hyper%20video" title=" hyper video"> hyper video</a>, <a href="https://publications.waset.org/search?q=interactivity" title=" interactivity"> interactivity</a>, <a href="https://publications.waset.org/search?q=shape%20coding" title=" shape coding"> shape coding</a>, <a href="https://publications.waset.org/search?q=polygon." 
title=" polygon."> polygon.</a> </p> <a href="https://publications.waset.org/13244/enhanced-frame-based-video-coding-to-support-content-based-functionalities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13244/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13244/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13244/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13244/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13244/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13244/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13244/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13244/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13244/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13244/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13244.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1662</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">520</span> The Main Principles of Text-to-Speech Synthesis System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=K.R.%20Aida%E2%80%93Zade">K.R. Aida–Zade</a>, <a href="https://publications.waset.org/search?q=C.%20Ardil"> C. Ardil</a>, <a href="https://publications.waset.org/search?q=A.M.%20Sharifova"> A.M. Sharifova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, the main principles of text-to-speech synthesis system are presented. Associated problems which arise when developing speech synthesis system are described. Used approaches and their application in the speech synthesis systems for Azerbaijani language are shown.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=synthesis%20of%20Azerbaijani%20language" title="synthesis of Azerbaijani language">synthesis of Azerbaijani language</a>, <a href="https://publications.waset.org/search?q=morphemes" title=" morphemes"> morphemes</a>, <a href="https://publications.waset.org/search?q=phonemes" title="phonemes">phonemes</a>, <a href="https://publications.waset.org/search?q=sounds" title=" sounds"> sounds</a>, <a href="https://publications.waset.org/search?q=sentence" title=" sentence"> sentence</a>, <a href="https://publications.waset.org/search?q=speech%20synthesizer" title=" speech synthesizer"> speech synthesizer</a>, <a href="https://publications.waset.org/search?q=intonation" title=" intonation"> intonation</a>, <a href="https://publications.waset.org/search?q=accent" title=" accent"> accent</a>, <a href="https://publications.waset.org/search?q=pronunciation." 
title="pronunciation.">pronunciation.</a> </p> <a href="https://publications.waset.org/8303/the-main-principles-of-text-to-speech-synthesis-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8303/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8303/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8303/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8303/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8303/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8303/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8303/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8303/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8303/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8303/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8303.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">5652</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">519</span> TeleMe Speech Booster: Web-Based Speech Therapy and Training Program for Children with Articulation Disorders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=C.%20Treerattanaphan">C. Treerattanaphan</a>, <a href="https://publications.waset.org/search?q=P.%20Boonpramuk"> P. Boonpramuk</a>, <a href="https://publications.waset.org/search?q=P.%20Singla"> P. Singla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Frequent, continuous speech training has proven to be a necessary part of a successful speech therapy process, but constraints of traveling time and employment dispensation become key obstacles especially for individuals living in remote areas or for dependent children who have working parents. In order to ameliorate speech difficulties with ample guidance from speech therapists, a website has been developed that supports speech therapy and training for people with articulation disorders in the standard Thai language. This web-based program has the ability to record speech training exercises for each speech trainee. The records will be stored in a database for the speech therapist to investigate, evaluate, compare and keep track of all trainees’ progress in detail. Speech trainees can request live discussions via video conference call when needed. Communication through this web-based program facilitates and reduces training time in comparison to walk-in training or appointments. 
This type of training also allows people with articulation disorders to practice speech lessons whenever and wherever it is convenient for them, which can lead to a more regular training process.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Web-Based%20Remote%20Training%20Program" title="Web-Based Remote Training Program">Web-Based Remote Training Program</a>, <a href="https://publications.waset.org/search?q=Thai%20Speech%0D%0ATherapy" title=" Thai Speech Therapy"> Thai Speech Therapy</a>, <a href="https://publications.waset.org/search?q=Articulation%20Disorders." title=" Articulation Disorders."> Articulation Disorders.</a> </p> <a href="https://publications.waset.org/9999541/teleme-speech-booster-web-based-speech-therapy-and-training-program-for-children-with-articulation-disorders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9999541/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9999541/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9999541/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9999541/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9999541/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9999541/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9999541/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9999541/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9999541/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9999541/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9999541.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1859</span> </span> </div> </div>
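<p>The record-and-review workflow described in the abstract above can be illustrated with a short browser-side sketch. This is a hypothetical illustration rather than code from the TeleMe system: the <code>/api/exercises</code> endpoint and the form field names are assumptions, while the capture itself uses the standard MediaRecorder web API.</p>
<pre><code>// Hypothetical sketch only: the endpoint and field names are assumptions, not
// TeleMe's actual API. MediaRecorder, FormData, and fetch are standard
// browser APIs.
async function recordAndSubmitExercise(traineeId, lessonId, durationMs) {
  // Ask the browser for microphone access and start capturing audio.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (event) => chunks.push(event.data);

  // Record for the requested duration, then stop and wait for the final chunk.
  recorder.start();
  await new Promise((resolve) => setTimeout(resolve, durationMs));
  const stopped = new Promise((resolve) => { recorder.onstop = resolve; });
  recorder.stop();
  await stopped;
  stream.getTracks().forEach((track) => track.stop());

  // Package the recording so the server can store it for the therapist to
  // investigate, compare, and track over time.
  const form = new FormData();
  form.append('trainee', traineeId);
  form.append('lesson', lessonId);
  form.append('audio', new Blob(chunks, { type: 'audio/webm' }), 'exercise.webm');
  return fetch('/api/exercises', { method: 'POST', body: form });
}</code></pre>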
class="page-link" href="https://publications.waset.org/search?q=perceptual%20speech%20coding&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=perceptual%20speech%20coding&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=perceptual%20speech%20coding&page=18">18</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=perceptual%20speech%20coding&page=19">19</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=perceptual%20speech%20coding&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" 
class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>