Search results for: cepstral analysis
Commenced in January 2007. Frequency: Monthly. Edition: International. Paper Count: 8710.

8710. Evaluation of Features Extraction Algorithms for a Real-Time Isolated Word Recognition System
Authors: Tomyslav Sledevič, Artūras Serackis, Gintautas Tamulevičius, Dalius Navakauskas
Abstract: This paper presents a comparative evaluation of feature extraction algorithms for a real-time isolated word recognition system based on an FPGA. The Mel-frequency cepstral, linear frequency cepstral, linear predictive and linear predictive cepstral coefficients were implemented in a hardware/software design. The proposed system was investigated in speaker-dependent mode for 100 different Lithuanian words. The robustness of the feature extraction algorithms was tested by recognizing speech records at different signal-to-noise ratios. The experiments on clean records show the highest accuracy for Mel-frequency cepstral and linear frequency cepstral coefficients. For records with a 15 dB signal-to-noise ratio, the linear predictive cepstral coefficients give the best result. The hardware and software parts of the system are clocked at 50 MHz and 100 MHz respectively. For classification, a pipelined dynamic time warping core was implemented. The proposed word recognition system satisfies the real-time requirements and is suitable for applications in embedded systems.
Keywords: Isolated word recognition, feature extraction, MFCC, LFCC, LPCC, LPC, FPGA, DTW
Downloads: 3540

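A minimal dynamic time warping sketch (in NumPy, not the paper's FPGA implementation) illustrating the kind of template matching a DTW core performs; the frame-wise Euclidean local cost and the function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dtw_distance(ref, test):
    """Align two feature sequences (frames x coefficients) and return the total DTW cost."""
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],            # insertion
                                 cost[i, j - 1],            # deletion
                                 cost[i - 1, j - 1])        # match
    return cost[n, m]

# Usage: the vocabulary word whose reference template minimizes dtw_distance() is recognized.
```
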
8709. Robust Features for Impulsive Noisy Speech Recognition Using Relative Spectral Analysis
Authors: Hajer Rahali, Zied Hajaiej, Noureddine Ellouze
Abstract: The goal of speech parameterization is to extract the relevant information about what is being spoken from the audio signal. In speech recognition systems, Mel-Frequency Cepstral Coefficients (MFCC) and Relative Spectral Mel-Frequency Cepstral Coefficients (RASTA-MFCC) are the two main techniques used. This paper presents some modifications to the original MFCC method. In our work, the effectiveness of the proposed changes to MFCC, called Modified Function Cepstral Coefficients (MODFCC), was tested and compared against the original MFCC and RASTA-MFCC features. Prosodic features such as jitter and shimmer are added to the baseline spectral features. The above-mentioned techniques were tested with impulsive signals under various noisy conditions within the AURORA databases.
Keywords: Auditory filter, impulsive noise, MFCC, prosodic features, RASTA filter
Downloads: 2323

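For readers unfamiliar with the RASTA idea, the sketch below band-pass filters each log filter-bank energy trajectory over time to suppress very slow (convolutional) and very fast (impulsive) components. The filter coefficients are the ones commonly quoted for the classical RASTA filter and should be treated as an assumption rather than the authors' exact configuration.

```python
import numpy as np
from scipy.signal import lfilter

def rasta_filter(log_energies):
    """log_energies: (frames x bands) log filter-bank energies; returns filtered trajectories."""
    b = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])   # FIR part: emphasizes spectral change
    a = np.array([1.0, -0.98])                         # leaky integrator (IIR part)
    return lfilter(b, a, log_energies, axis=0)         # filter along the time axis
```
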
8708. Spectral Analysis of Speech: A New Technique
Authors: Neeta Awasthy, J. P. Saini, D. S. Chauhan
Abstract: ICA, which is generally used for the blind source separation problem, has been tested for feature extraction in a speech recognition system to replace the phoneme-based approach of MFCC. Applying the cepstral coefficients to ICA as preprocessing has produced a new signal processing approach. This gives much better results than MFCC and ICA separately, both for word and speaker recognition. The mixing matrix A is different before and after MFCC, as expected, since Mel is a nonlinear scale. However, cepstra generated from Linear Predictive Coefficients, being independent, prove to be the right candidate for ICA. Matlab is the tool used for all comparisons. The database used is samples of ISOLET.
Keywords: Cepstral Coefficient, Distance measures, Independent Component Analysis, Linear Predictive Coefficients
Downloads: 1957

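A minimal sketch of the pipeline described in the abstract: cepstral coefficients are computed first and ICA is applied as a post-processing stage, so the recognizer works on (approximately) independent components. scikit-learn's FastICA and the input file name are illustrative stand-ins for the authors' Matlab routines.

```python
import numpy as np
from sklearn.decomposition import FastICA

ceps = np.load("lpc_cepstra.npy")           # hypothetical (frames x 13) cepstral matrix
ica = FastICA(n_components=13, random_state=0)
features = ica.fit_transform(ceps)           # rows: frames, columns: independent components
mixing_matrix_A = ica.mixing_                # the mixing matrix A discussed in the abstract
# 'features' would then replace the raw cepstra as input to the word/speaker recognizer.
```
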
8707. Comparison of MFCC and Cepstral Coefficients as a Feature Set for PCG Biometric Systems
Authors: Justin Leo Cheang Loong, Khazaimatol S Subari, Muhammad Kamil Abdullah, Nurul Nadia Ahmad, Rosli Besar
Abstract: Heart sound is an acoustic signal, and many techniques used nowadays for human recognition tasks borrow from speech recognition. One popular choice for feature extraction from acoustic signals is the Mel Frequency Cepstral Coefficients (MFCC), which map the signal onto a non-linear Mel scale that mimics human hearing. However, the Mel scale is almost linear in the frequency region of heart sounds and thus should produce results similar to the standard cepstral coefficients (CC). In this paper, MFCC is investigated to see if it produces superior results for a PCG-based human identification system compared to CC. Results show that the MFCC system is still superior to CC despite the linear filter-banks in the lower frequency range, giving up to a 95% correct recognition rate for MFCC and 90% for CC. Further experiments show that the high recognition rate is due to the implementation of filter-banks and not to Mel scaling.
Keywords: Biometric, Phonocardiogram, Cepstral Coefficients, Mel Frequency
Downloads: 3552

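The argument above rests on the mel scale being nearly linear in the low-frequency band occupied by heart sounds; a quick check with the commonly used mel formula (assumed here, not quoted from the paper) illustrates this.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)   # standard mel mapping (assumption)

f = np.linspace(0, 500, 6)          # illustrative low-frequency band for heart sounds
print(np.round(hz_to_mel(f), 1))    # nearly equally spaced values -> near-linear mapping
```
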
8706. A Supervised Text-Independent Speaker Recognition Approach
Authors: Tudor Barbu
Abstract: We provide a supervised text-independent voice recognition technique in this paper. In the feature extraction stage we propose a mel-cepstral based approach. Our feature vector classification method uses a special nonlinear metric, derived from the Hausdorff distance for sets, and a minimum mean distance classifier.
Keywords: Text-independent speaker recognition, mel cepstral analysis, speech feature vector, Hausdorff-based metric, supervised classification
Downloads: 1829

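A sketch of the classification idea: each utterance is treated as a set of mel-cepstral vectors, compared to each speaker's enrolled sets with a Hausdorff-type set distance, and the speaker with the smallest mean distance is selected. The exact variant of the metric used in the paper is not reproduced here; this is only the generic construction.

```python
import numpy as np

def directed_hausdorff(A, B):
    """Largest distance from a vector in A to its nearest neighbour in B (A, B: frames x dims)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).max()

def set_distance(A, B):
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

def identify(test_set, speaker_models):
    """speaker_models: dict speaker -> list of enrolled feature sets (minimum mean distance rule)."""
    scores = {spk: np.mean([set_distance(test_set, s) for s in sets])
              for spk, sets in speaker_models.items()}
    return min(scores, key=scores.get)
```
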
8705. Investigation of Combined Use of MFCC and LPC Features in Speech Recognition Systems
Authors: K. R. Aida-Zade, C. Ardil, S. S. Rustamov
Abstract: The paper states the automatic speech recognition problem and outlines its purpose and application fields. Taking Azerbaijani speech as the case, the design principles of a speech recognition system and the problems arising in the system are investigated. The computing algorithms for speech features, the main part of a speech recognition system, are analyzed. From this point of view, algorithms for determining Mel Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coding (LPC) coefficients, which express the basic speech features, are developed. Combined use of MFCC and LPC cepstra is suggested to improve the reliability of the speech recognition system. To this end, the recognition system is divided into MFCC-based and LPC-based recognition subsystems. Training and recognition are carried out in both subsystems separately, and the system accepts a decision only when both subsystems return the same result, which decreases the error rate during recognition. Training and recognition are realized by artificial neural networks trained with the conjugate gradient method. The paper investigates the problems caused by the number of speech features when training the neural networks of the MFCC- and LPC-based subsystems, and analyzes the variation in results of neural networks trained from different initial points. A methodology for combining neural networks trained from different initial points is suggested to improve the reliability and quality of recognition, and practical results are shown.
Keywords: Speech recognition, cepstral analysis, voice activation detection algorithm, Mel Frequency Cepstral Coefficients, features of speech, Cepstral Mean Subtraction, neural networks, Linear Predictive Coding
Downloads: 913

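A sketch of the agreement rule described in the abstract: the MFCC-based and LPC-based subsystems (each an independently trained neural network) classify the utterance in parallel, and a label is accepted only when both return the same result. The recognizer objects and their predict method are placeholders.

```python
def combined_decision(mfcc_features, lpc_features, mfcc_net, lpc_net):
    """Return the recognized word only if both subsystems agree, otherwise reject."""
    label_mfcc = mfcc_net.predict(mfcc_features)   # MFCC-based subsystem decision
    label_lpc = lpc_net.predict(lpc_features)      # LPC-based subsystem decision
    if label_mfcc == label_lpc:
        return label_mfcc                          # agreement -> accept
    return None                                    # disagreement -> reject / fall back
```
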
8704. Long-Term Simulation of Digestive Sound Signals by Cepstral Technique
Authors: Einalou Z., Najafi Z., Maghooli K., Zandi Y., Sheibeigi A.
Abstract: In this study, an investigation of digestive diseases has been carried out in which sound acts as the detection medium. After preprocessing, the extracted signal is registered in the cepstrum domain. After classification of the digestive diseases, the system selects random samples based on their features and generates the nonstationary, long-term signals of interest via the inverse transform in the cepstral domain, presented in digital and audio form as the output. This structure is updatable; in other words, on receiving a new signal the corresponding disease classification is updated in the feature domain.
Keywords: Cepstrum, databank, digestive disease, acoustic signal
Downloads: 1556

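A sketch of the cepstral analysis and inverse-transform path the abstract relies on: the real cepstrum of a frame is the inverse FFT of its log magnitude spectrum, and a frame can be regenerated by reversing the steps. Phase is discarded here (zero-phase resynthesis), which is an assumption and not necessarily the authors' exact procedure.

```python
import numpy as np

def real_cepstrum(frame):
    spectrum = np.fft.fft(frame)
    return np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real   # analysis: signal -> cepstrum

def from_cepstrum(cepstrum):
    log_magnitude = np.fft.fft(cepstrum).real
    return np.fft.ifft(np.exp(log_magnitude)).real               # synthesis: cepstrum -> frame
```
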
8703. Accuracy of Divergence Measures for Detection of Abrupt Changes
Authors: P. Bergl
Abstract: Numerous divergence measures (spectral distance, cepstral distance, difference of the cepstral coefficients, Kullback-Leibler divergence, distance given by the General Likelihood Ratio, distance defined by the Recursive Bayesian Changepoint Detector, and the Mahalanobis measure) are compared in this study. The measures are used for detection of abrupt spectral changes in synthetic AR signals via a sliding window algorithm. Two experiments are performed; the first is focused on detection of a single boundary while the second concentrates on detection of a pair of boundaries. The accuracy of detection is judged for each method, and the measures are compared according to the results of both experiments.
Keywords: Abrupt changes detection, autoregressive model, divergence measure
Downloads: 1449

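A sketch of the sliding-window scheme in which such divergence measures are compared: two adjacent windows move along the signal, a feature vector is computed in each, and a change point is declared where their distance peaks. A real-cepstrum vector and the plain difference of cepstral coefficients stand in here for the AR-model-based measures; window sizes are arbitrary.

```python
import numpy as np

def cepstral_features(frame, n_coef=12):
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12
    return np.fft.irfft(np.log(spectrum))[:n_coef]         # low-quefrency cepstral coefficients

def divergence_curve(x, win=256, hop=64):
    scores = []
    for start in range(0, len(x) - 2 * win, hop):
        left = cepstral_features(x[start:start + win])
        right = cepstral_features(x[start + win:start + 2 * win])
        scores.append(np.linalg.norm(left - right))         # difference of cepstral coefficients
    return np.array(scores)                                 # peaks mark candidate boundaries
```
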
8702. The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition
Authors: Fawaz S. Al-Anzi, Dia AbuZeina
Abstract: Speech recognition makes an important contribution to new technologies in human-computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires several stages before obtaining the desired output. Among the components of automatic speech recognition (ASR) is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. Feature extraction aims at approximating the linguistic content conveyed by the input speech signal. In the speech processing field there are several methods to extract speech features; however, Mel Frequency Cepstral Coefficients (MFCC) is the most popular technique. It has long been observed that MFCC is dominantly used in well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice to identify the different speech segments in order to obtain the language phonemes for further training and decoding steps. Owing to its good performance, previous studies show that MFCC dominates Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps that are performed to obtain these coefficients using the HTK toolkit.
Keywords: Speech recognition, acoustic features, Mel Frequency Cepstral Coefficients
Downloads: 1973

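A condensed sketch of the intermediate steps behind MFCC extraction: framing and windowing, magnitude spectrum, triangular mel filter-bank integration, log compression, and a DCT that yields the cepstral coefficients. Parameter values are typical defaults, not HTK's exact configuration.

```python
import numpy as np
from scipy.fft import dct

def mfcc(signal, sr, n_fft=512, n_mels=26, n_ceps=13, frame_len=400, hop=160):
    # Triangular mel filter bank on the rFFT bins.
    mels = np.linspace(0, 2595 * np.log10(1 + (sr / 2) / 700), n_mels + 2)
    hz = 700 * (10 ** (mels / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fbank[m - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)
    # Framing, windowing, power spectrum, log mel energies, DCT.
    frames = [signal[i:i + frame_len] * np.hamming(frame_len)
              for i in range(0, len(signal) - frame_len, hop)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    log_energies = np.log(power @ fbank.T + 1e-10)
    return dct(log_energies, type=2, axis=1, norm='ortho')[:, :n_ceps]
```
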
8701. Through Biometric Card in Romania: Person Identification by Face, Fingerprint and Voice Recognition
Authors: Hariton N. Costin, Iulian Ciocoiu, Tudor Barbu, Cristian Rotariu
Abstract: In this paper three different approaches for person verification and identification, i.e. by means of fingerprint, face and voice recognition, are studied. Face recognition uses parts-based representation methods and a manifold learning approach; the assessment criterion is recognition accuracy. The techniques under investigation are: a) Local Non-negative Matrix Factorization (LNMF); b) Independent Component Analysis (ICA); c) NMF with sparse constraints (NMFsc); d) Locality Preserving Projections (Laplacianfaces). Fingerprint detection was approached by classical minutiae (small graphical patterns) matching through image segmentation, using a structural approach and a neural network as the decision block. For voice/speaker recognition, mel cepstral and delta-delta mel cepstral analysis were used as the main methods in order to construct a supervised speaker-dependent voice recognition system. The final decision (e.g. "accept/reject" for a verification task) is taken by a majority voting technique applied to the three biometrics. The preliminary results, obtained for medium-sized databases of fingerprints, faces and voice recordings, indicate the feasibility of our study and an overall recognition precision (about 92%) permitting the use of our system for a future complex biometric card.
Keywords: Biometry, image processing, pattern recognition, speech analysis
Downloads: 1944

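A sketch of the decision-level fusion step: each modality produces its own accept/reject verdict and the final decision is their majority vote, as described in the abstract.

```python
def majority_vote(face_ok: bool, fingerprint_ok: bool, voice_ok: bool) -> bool:
    """Accept the identity claim when at least two of the three biometrics accept it."""
    return (face_ok + fingerprint_ok + voice_ok) >= 2
```
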
8700. Terrain Classification for Ground Robots Based on Acoustic Features
Authors: Bernd Kiefer, Abraham Gebru Tesfay, Dietrich Klakow
Abstract: The motivation of our work is to detect the different terrain types traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, such as Mel-frequency cepstral coefficients and Gammatone-frequency cepstral coefficients for feature extraction, and a Gaussian mixture model and a feed-forward neural network for classification. We analyze the system's performance by comparing our proposed techniques with other features surveyed from related work. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
Keywords: Terrain classification, acoustic features, autonomous robots, feature extraction
Downloads: 1132

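A sketch of the GMM branch of such a system: one Gaussian mixture model is fitted per terrain class on acoustic feature frames, and a new audio chunk is assigned to the class whose model gives the highest average log-likelihood. scikit-learn's GaussianMixture is assumed here purely for illustration.

```python
from sklearn.mixture import GaussianMixture

def train_terrain_models(features_by_class, n_components=8):
    """features_by_class: dict terrain -> (frames x dims) array of training features."""
    return {terrain: GaussianMixture(n_components, covariance_type='diag').fit(X)
            for terrain, X in features_by_class.items()}

def classify_chunk(models, chunk_features):
    scores = {terrain: model.score(chunk_features)   # mean log-likelihood of the chunk's frames
              for terrain, model in models.items()}
    return max(scores, key=scores.get)
```
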
8699. Trispectral Analysis of Voiced Sounds Defective Audition and Tracheotomisian Cases
Authors: H. Maalem, F. Marir
Abstract: This paper presents the cepstral and trispectral analysis of speech signals produced by normal men, men with defective audition (deaf, profoundly deaf) and others affected by tracheotomy; the trispectral analysis is based on parametric (autoregressive, AR) methods using the fourth-order cumulant. These analyses are used to detect and compare the pitches and the formants of the corresponding voiced sounds (vowels \a\, \i\ and \u\). The first results appear promising since, after several experiments, there seems to be no deformation of the spectrum as one could have supposed at the beginning; however, the pathologies influence the two characteristics differently: defective audition affects the formants, whereas tracheotomy affects the fundamental frequency (pitch).
Keywords: Cepstrum, cumulant, defective audition, tracheotomy, trispectrum
Downloads: 1407

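A sketch of the cepstral side of such an analysis: in the real cepstrum of a voiced frame, the vocal-tract envelope (formants) occupies the low quefrencies while the pitch appears as a peak at the quefrency of the glottal period. The search limits below are typical values, not the authors'.

```python
import numpy as np

def cepstral_pitch(frame, sr, fmin=60.0, fmax=400.0):
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) + 1e-12
    cepstrum = np.fft.irfft(np.log(spectrum))
    lo, hi = int(sr / fmax), int(sr / fmin)       # plausible range of pitch periods (samples)
    period = lo + np.argmax(cepstrum[lo:hi])      # strongest cepstral peak -> glottal period
    return sr / period                            # fundamental frequency in Hz
```
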
8698. Analysis of Combined Use of NN and MFCC for Speech Recognition
Authors: Safdar Tanweer, Abdul Mobin, Afshar Alam
Abstract: The performance and analysis of a speech recognition system are illustrated in this paper. An approach to recognize the English words corresponding to the digits 0-9, spoken by two different speakers, is captured in a noise-free environment. For feature extraction, Mel frequency cepstral coefficients (MFCC) have been used, which give a set of feature vectors from the recorded speech samples. A neural network model, a feed-forward network trained with the back-propagation algorithm, is used to enhance the recognition performance. Other speech recognition techniques such as HMM and DTW also exist. All experiments are carried out in Matlab.
Keywords: Speech Recognition, MFCC, Neural Network, classifier
Downloads: 3268

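A sketch of this kind of recognition back-end: one fixed-length MFCC-derived vector per utterance is fed to a feed-forward network trained by back-propagation. scikit-learn's MLPClassifier and the file names stand in for the Matlab network and data used by the authors.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.load("digit_mfcc_vectors.npy")   # hypothetical (utterances x features) matrix
y = np.load("digit_labels.npy")         # hypothetical digit labels 0..9

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(X, y)                            # gradient-based (back-propagation style) training
print(net.predict(X[:5]))                # predicted digits for the first few utterances
```
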
8697. Improved Text-Independent Speaker Identification Using Fused MFCC and IMFCC Feature Sets Based on Gaussian Filter
Authors: Sandipan Chakroborty, Goutam Saha
Abstract: A state-of-the-art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for a generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for speech-related applications. In a recent contribution by the authors, it has been shown that the Inverted Mel-Frequency Cepstral Coefficients (IMFCC) are a useful feature set for SI, containing complementary information present in the high-frequency region. This paper introduces a Gaussian-shaped filter (GF) for calculating MFCC and IMFCC in place of the typical triangular-shaped bins. The objective is to introduce a higher amount of correlation between subband outputs. The performance of both MFCC and IMFCC improves with GF over the conventional triangular filter (TF) based implementation, individually as well as in combination. With GMM as the speaker modeling paradigm, the performance of the proposed GF-based MFCC and IMFCC in individual and fused mode has been verified on two standard databases, YOHO (microphone speech) and POLYCOST (telephone speech), each of which has more than 130 speakers.
Keywords: Gaussian Filter, Triangular Filter, Subbands, Correlation, MFCC, IMFCC, GMM
Downloads: 2449

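A sketch of the modification the paper introduces: the triangular mel bins are replaced by Gaussian-shaped filters so that neighbouring sub-band outputs overlap more and become more correlated. The centre frequencies are mel-spaced as usual; tying each filter's width to the spacing of its neighbours is an assumption, not the paper's exact rule.

```python
import numpy as np

def gaussian_filterbank(n_filters, n_fft, sr):
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    centers = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2))[1:-1]
    freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)
    sigmas = np.gradient(centers)                  # width follows the neighbour spacing (assumed)
    return np.exp(-0.5 * ((freqs[None, :] - centers[:, None]) / sigmas[:, None]) ** 2)
```
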
8696. Comparison of Parameterization Methods in Recognizing Spoken Arabic Digits
Authors: Ali Ganoun
Abstract: This paper evaluates sound parameterization methods for recognizing some spoken Arabic words, namely the digits from zero to nine. Each isolated spoken word is represented by a single template based on a specific recognition feature, and recognition is based on the Euclidean distance from those templates. The performance analysis is based on four parameterization features: Burg spectrum analysis, Walsh spectrum analysis, Thomson multitaper spectrum analysis, and Mel Frequency Cepstral Coefficients (MFCC). The main aim of this paper is to compare, analyze, and discuss the outcomes of spoken Arabic digit recognition systems based on the selected recognition features. The results acquired confirm that the use of MFCC features is a very promising method for recognizing spoken Arabic digits.
Keywords: Speech Recognition, Spectrum Analysis, Burg Spectrum, Walsh Spectrum Analysis, Thomson Multitaper Spectrum, MFCC
Downloads: 1593

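A sketch of the template-matching rule used to compare the four feature types: each digit keeps a single stored template per feature, and a test word is assigned to the digit whose template is closest in Euclidean distance.

```python
import numpy as np

def recognize(test_vector, templates):
    """templates: dict digit -> template vector of the same length as test_vector."""
    return min(templates, key=lambda digit: np.linalg.norm(test_vector - templates[digit]))
```
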
href="https://publications.waset.org/search?q=Sandipan%20Chakroborty">Sandipan Chakroborty</a>, <a href="https://publications.waset.org/search?q=Goutam%20Saha"> Goutam Saha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A state of the art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC) modeled on the human auditory system has been used as a standard acoustic feature set for speech related applications. On a recent contribution by authors, it has been shown that the Inverted Mel- Frequency Cepstral Coefficients (IMFCC) is useful feature set for SI, which contains complementary information present in high frequency region. This paper introduces the Gaussian shaped filter (GF) while calculating MFCC and IMFCC in place of typical triangular shaped bins. The objective is to introduce a higher amount of correlation between subband outputs. The performances of both MFCC & IMFCC improve with GF over conventional triangular filter (TF) based implementation, individually as well as in combination. With GMM as speaker modeling paradigm, the performances of proposed GF based MFCC and IMFCC in individual and fused mode have been verified in two standard databases YOHO, (Microphone Speech) and POLYCOST (Telephone Speech) each of which has more than 130 speakers. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Gaussian%20Filter" title="Gaussian Filter">Gaussian Filter</a>, <a href="https://publications.waset.org/search?q=Triangular%20Filter" title=" Triangular Filter"> Triangular Filter</a>, <a href="https://publications.waset.org/search?q=Subbands" title=" Subbands"> Subbands</a>, <a href="https://publications.waset.org/search?q=Correlation" title="Correlation">Correlation</a>, <a href="https://publications.waset.org/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/search?q=IMFCC" title=" IMFCC"> IMFCC</a>, <a href="https://publications.waset.org/search?q=GMM." 
title=" GMM."> GMM.</a> </p> <a href="https://publications.waset.org/9849/improved-text-independent-speaker-identification-using-fused-mfcc-and-imfcc-feature-sets-based-on-gaussian-filter" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9849/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9849/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9849/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9849/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9849/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9849/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9849/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9849/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9849/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9849/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9849.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2449</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8696</span> Comparison of Parameterization Methods in Recognizing Spoken Arabic Digits</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Ali%20Ganoun">Ali Ganoun</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper proposes evaluation of sound parameterization methods in recognizing some spoken Arabic words, namely digits from zero to nine. Each isolated spoken word is represented by a single template based on a specific recognition feature, and the recognition is based on the Euclidean distance from those templates. The performance analysis of recognition is based on four parameterization features: the Burg Spectrum Analysis, the Walsh Spectrum Analysis, the Thomson Multitaper Spectrum Analysis and the Mel Frequency Cepstral Coefficients (MFCC) features. The main aim of this paper was to compare, analyze, and discuss the outcomes of spoken Arabic digits recognition systems based on the selected recognition features. 
The results acquired confirm that the use of MFCC features is a very promising method in recognizing spoken Arabic digits.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speech%20Recognition" title="Speech Recognition">Speech Recognition</a>, <a href="https://publications.waset.org/search?q=Spectrum%20Analysis" title=" Spectrum Analysis"> Spectrum Analysis</a>, <a href="https://publications.waset.org/search?q=Burg%20Spectrum" title=" Burg Spectrum"> Burg Spectrum</a>, <a href="https://publications.waset.org/search?q=Walsh%20Spectrum%20Analysis" title=" Walsh Spectrum Analysis"> Walsh Spectrum Analysis</a>, <a href="https://publications.waset.org/search?q=Thomson%20Multitaper%20Spectrum" title=" Thomson Multitaper Spectrum"> Thomson Multitaper Spectrum</a>, <a href="https://publications.waset.org/search?q=MFCC." title=" MFCC."> MFCC.</a> </p> <a href="https://publications.waset.org/5922/comparison-of-parameterization-methods-in-recognizing-spoken-arabic-digits" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5922/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5922/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5922/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5922/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5922/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5922/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5922/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5922/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5922/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5922/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5922.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1593</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8695</span> An Advanced Method for Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Meysam%20Mohamad%20pour">Meysam Mohamad pour</a>, <a href="https://publications.waset.org/search?q=Fardad%20Farokhi"> Fardad Farokhi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, in consideration of the deficiencies of the available techniques for speech recognition, an advanced method is presented that is able to classify speech signals with high accuracy (98%) in minimal time. 
In the presented method, the recorded signal is first preprocessed; this stage includes denoising with Mel frequency cepstral analysis and feature extraction using discrete wavelet transform (DWT) coefficients. These features are then fed to a Multilayer Perceptron (MLP) network for classification. Finally, after training of the neural network, effective features are selected with the UTA algorithm. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Multilayer%20perceptron%20%28MLP%29%20neural%20network" title="Multilayer perceptron (MLP) neural network">Multilayer perceptron (MLP) neural network</a>, <a href="https://publications.waset.org/search?q=Discrete%20Wavelet%20Transform%20%28DWT%29" title=" Discrete Wavelet Transform (DWT) "> Discrete Wavelet Transform (DWT) </a>, <a href="https://publications.waset.org/search?q=Mels%20Scale%20Frequency%20Filter" title=" Mels Scale Frequency Filter "> Mels Scale Frequency Filter </a>, <a href="https://publications.waset.org/search?q=UTA%20algorithm." title="UTA algorithm.">UTA algorithm.</a> </p> <a href="https://publications.waset.org/4571/an-advanced-method-for-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4571/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4571/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4571/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4571/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4571/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4571/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4571/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4571/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4571/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4571/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4571.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2366</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8694</span> Forensic Speaker Verification in Noisy Environmental by Enhancing the Speech Signal Using ICA Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Ahmed%20Kamil%20Hasan%20Al-Ali">Ahmed Kamil Hasan Al-Ali</a>, <a href="https://publications.waset.org/search?q=Bouchra%20Senadji"> Bouchra Senadji</a>, <a href="https://publications.waset.org/search?q=Ganesh%20Naik"> Ganesh Naik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose a system robust to real environmental noise and channel mismatch for forensic speaker verification systems. 
This method is based on suppressing various types of real environmental noise by using independent component analysis (ICA) algorithm. The enhanced speech signal is applied to mel frequency cepstral coefficients (MFCC) or MFCC feature warping to extract the essential characteristics of the speech signal. Channel effects are reduced using an intermediate vector (i-vector) and probabilistic linear discriminant analysis (PLDA) approach for classification. The proposed algorithm is evaluated by using an Australian forensic voice comparison database, combined with car, street and home noises from QUT-NOISE at a signal to noise ratio (SNR) ranging from -10 dB to 10 dB. Experimental results indicate that the MFCC feature warping-ICA achieves a reduction in equal error rate about (48.22%, 44.66%, and 50.07%) over using MFCC feature warping when the test speech signals are corrupted with random sessions of street, car, and home noises at -10 dB SNR. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Noisy%20forensic%20speaker%20verification" title="Noisy forensic speaker verification">Noisy forensic speaker verification</a>, <a href="https://publications.waset.org/search?q=ICA%20algorithm" title=" ICA algorithm"> ICA algorithm</a>, <a href="https://publications.waset.org/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/search?q=MFCC%20feature%20warping." title=" MFCC feature warping."> MFCC feature warping.</a> </p> <a href="https://publications.waset.org/10006970/forensic-speaker-verification-in-noisy-environmental-by-enhancing-the-speech-signal-using-ica-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10006970/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10006970/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10006970/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10006970/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10006970/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10006970/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10006970/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10006970/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10006970/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10006970/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10006970.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">990</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8693</span> Puff Noise Detection and Cancellation for Robust Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/search?q=Sangjun%20Park">Sangjun Park</a>, <a href="https://publications.waset.org/search?q=Jungpyo%20Hong"> Jungpyo Hong</a>, <a href="https://publications.waset.org/search?q=Byung-Ok%20Kang"> Byung-Ok Kang</a>, <a href="https://publications.waset.org/search?q=Yun-keun%20Lee"> Yun-keun Lee</a>, <a href="https://publications.waset.org/search?q=Minsoo%20Hahn"> Minsoo Hahn</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an algorithm for detecting and attenuating puff noises frequently generated under the mobile environment is proposed. As a baseline system, puff detection system is designed based on Gaussian Mixture Model (GMM), and 39th Mel Frequency Cepstral Coefficient (MFCC) is extracted as feature parameters. To improve the detection performance, effective acoustic features for puff detection are proposed. In addition, detected puff intervals are attenuated by high-pass filtering. The speech recognition rate was measured for evaluation and confusion matrix and ROC curve are used to confirm the validity of the proposed system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Gaussian%20mixture%20model" title="Gaussian mixture model">Gaussian mixture model</a>, <a href="https://publications.waset.org/search?q=puff%20detection%20and%0Acancellation" title=" puff detection and cancellation"> puff detection and cancellation</a>, <a href="https://publications.waset.org/search?q=speech%20enhancement." title=" speech enhancement."> speech enhancement.</a> </p> <a href="https://publications.waset.org/10021/puff-noise-detection-and-cancellation-for-robust-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10021/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10021/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10021/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10021/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10021/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10021/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10021/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10021/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10021/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10021/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10021.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2233</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8692</span> Road Vehicle Recognition Using Magnetic Sensing Feature Extraction and Classification </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/search?q=Xiao%20Chen">Xiao Chen</a>, <a href="https://publications.waset.org/search?q=Xiaoying%20Kong"> Xiaoying Kong</a>, <a href="https://publications.waset.org/search?q=Min%20Xu"> Min Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper presents a road vehicle detection approach for the intelligent transportation system. This approach mainly uses low-cost magnetic sensor and associated data collection system to collect magnetic signals. This system can measure the magnetic field changing, and it also can detect and count vehicles. We extend Mel Frequency Cepstral Coefficients to analyze vehicle magnetic signals. Vehicle type features are extracted using representation of cepstrum, frame energy, and gap cepstrum of magnetic signals. We design a 2-dimensional map algorithm using Vector Quantization to classify vehicle magnetic features to four typical types of vehicles in Australian suburbs: sedan, VAN, truck, and bus. Experiments results show that our approach achieves a high level of accuracy for vehicle detection and classification.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Vehicle%20classification" title="Vehicle classification">Vehicle classification</a>, <a href="https://publications.waset.org/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/search?q=road%20traffic%20model" title=" road traffic model"> road traffic model</a>, <a href="https://publications.waset.org/search?q=magnetic%20sensing." title=" magnetic sensing. "> magnetic sensing. </a> </p> <a href="https://publications.waset.org/10008804/road-vehicle-recognition-using-magnetic-sensing-feature-extraction-and-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10008804/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10008804/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10008804/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10008804/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10008804/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10008804/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10008804/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10008804/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10008804/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10008804/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10008804.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1401</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8691</span> Voice Command Recognition 
System Based on MFCC and VQ Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mahdi%20Shaneh">Mahdi Shaneh</a>, <a href="https://publications.waset.org/search?q=Azizollah%20Taheri"> Azizollah Taheri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of this project is to design a system to recognize voice commands. Most voice recognition systems contain two main modules, namely "feature extraction" and "feature matching". In this project, the MFCC algorithm is used to simulate the feature extraction module. Using this algorithm, the cepstral coefficients are calculated on the mel frequency scale. The VQ (vector quantization) method is used to reduce the amount of data and thereby decrease the computation time. In the feature matching stage, the Euclidean distance is applied as the similarity criterion. Because of the high accuracy of the algorithms used, the accuracy of this voice command system is high: with each command repeated at least 5 times in a single training session and then twice in each testing session, a zero error rate in command recognition is achieved. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=MFCC" title="MFCC">MFCC</a>, <a href="https://publications.waset.org/search?q=Vector%20quantization" title=" Vector quantization"> Vector quantization</a>, <a href="https://publications.waset.org/search?q=Vocal%20tract" title=" Vocal tract"> Vocal tract</a>, <a href="https://publications.waset.org/search?q=Voicecommand." title=" Voicecommand."> Voicecommand.</a> </p> <a href="https://publications.waset.org/4967/voice-command-recognition-system-based-on-mfcc-and-vq-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4967/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4967/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4967/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4967/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4967/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4967/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4967/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4967/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4967/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4967/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4967.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3157</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8690</span> Efficient DTW-Based Speech Recognition System for Isolated Words of Arabic Language</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Khalid%20A.%20Darabkh">Khalid A. Darabkh</a>, <a href="https://publications.waset.org/search?q=Ala%20F.%20Khalifeh"> Ala F. Khalifeh</a>, <a href="https://publications.waset.org/search?q=Baraa%20A.%20Bathech"> Baraa A. Bathech</a>, <a href="https://publications.waset.org/search?q=Saed%20W.%20Sabah"> Saed W. Sabah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Despite the fact that Arabic language is currently one of the most common languages worldwide, there has been only a little research on Arabic speech recognition relative to other languages such as English and Japanese. Generally, digital speech processing and voice recognition algorithms are of special importance for designing efficient, accurate, as well as fast automatic speech recognition systems. However, the speech recognition process carried out in this paper is divided into three stages as follows: firstly, the signal is preprocessed to reduce noise effects. After that, the signal is digitized and hearingized. Consequently, the voice activity regions are segmented using voice activity detection (VAD) algorithm. Secondly, features are extracted from the speech signal using Mel-frequency cepstral coefficients (MFCC) algorithm. Moreover, delta and acceleration (delta-delta) coefficients have been added for the reason of improving the recognition accuracy. Finally, each test word-s features are compared to the training database using dynamic time warping (DTW) algorithm. Utilizing the best set up made for all affected parameters to the aforementioned techniques, the proposed system achieved a recognition rate of about 98.5% which outperformed other HMM and ANN-based approaches available in the literature. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Arabic%20speech%20recognition" title="Arabic speech recognition">Arabic speech recognition</a>, <a href="https://publications.waset.org/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/search?q=DTW" title=" DTW"> DTW</a>, <a href="https://publications.waset.org/search?q=VAD." 
title=" VAD."> VAD.</a> </p> <a href="https://publications.waset.org/9982/efficient-dtw-based-speech-recognition-system-for-isolated-words-of-arabic-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9982/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9982/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9982/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9982/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9982/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9982/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9982/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9982/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9982/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9982/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9982.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">4075</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8689</span> Speaker Identification Using Admissible Wavelet Packet Based Decomposition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mangesh%20S.%20Deshpande">Mangesh S. Deshpande</a>, <a href="https://publications.waset.org/search?q=Raghunath%20S.%20Holambe"> Raghunath S. Holambe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Mel Frequency Cepstral Coefficient (MFCC) features are widely used as acoustic features for speech recognition as well as speaker recognition. In MFCC feature representation, the Mel frequency scale is used to get a high resolution in low frequency region, and a low resolution in high frequency region. This kind of processing is good for obtaining stable phonetic information, but not suitable for speaker features that are located in high frequency regions. The speaker individual information, which is non-uniformly distributed in the high frequencies, is equally important for speaker recognition. Based on this fact we proposed an admissible wavelet packet based filter structure for speaker identification. Multiresolution capabilities of wavelet packet transform are used to derive the new features. The proposed scheme differs from previous wavelet based works, mainly in designing the filter structure. Unlike others, the proposed filter structure does not follow Mel scale. The closed-set speaker identification experiments performed on the TIMIT database shows improved identification performance compared to other commonly used Mel scale based filter structures using wavelets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speaker%20identification" title="Speaker identification">Speaker identification</a>, <a href="https://publications.waset.org/search?q=Wavelet%20transform" title=" Wavelet transform"> Wavelet transform</a>, <a href="https://publications.waset.org/search?q=Feature%20extraction" title=" Feature extraction"> Feature extraction</a>, <a href="https://publications.waset.org/search?q=MFCC" title="MFCC">MFCC</a>, <a href="https://publications.waset.org/search?q=GMM." title=" GMM."> GMM.</a> </p> <a href="https://publications.waset.org/3764/speaker-identification-using-admissible-wavelet-packet-based-decomposition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3764/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3764/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3764/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3764/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3764/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3764/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3764/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3764/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3764/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3764/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3764.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1982</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8688</span> Orchestra/Percussion Classification Algorithm for United Speech Audio Coding System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Yueming%20Wang">Yueming Wang</a>, <a href="https://publications.waset.org/search?q=Rendong%20Ying"> Rendong Ying</a>, <a href="https://publications.waset.org/search?q=Sumxin%20Jiang"> Sumxin Jiang</a>, <a href="https://publications.waset.org/search?q=Peilin%20Liu"> Peilin Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Unified Speech Audio Coding (USAC), the latest MPEG standardization for unified speech and audio coding, uses a speech/audio classification algorithm to distinguish speech and audio segments of the input signal. The quality of the recovered audio can be increased by well-designed orchestra/percussion classification and subsequent processing. However, owing to the shortcoming of the system, introducing an orchestra/percussion classification and modifying subsequent processing can enormously increase the quality of the recovered audio. 
This paper proposes an orchestra/percussion classification algorithm for the USAC system which extracts only 3 scales of Mel-Frequency Cepstral Coefficients (MFCCs) rather than the traditional 13 scales of MFCCs and uses an Iterative Dichotomiser 3 (ID3) Decision Tree rather than other, more complex learning methods; thus, the proposed algorithm has lower computational complexity than most existing algorithms. Considering that frequent changing of attributes may lead to quality loss in the recovered audio signal, this paper also designs a modified subsequent process to help the whole classification system reach an accuracy rate as high as 97%, which is comparable to the classical 99%.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=ID3%20Decision%20Tree" title="ID3 Decision Tree">ID3 Decision Tree</a>, <a href="https://publications.waset.org/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/search?q=Orchestra%2FPercussion%20Classification" title=" Orchestra/Percussion Classification"> Orchestra/Percussion Classification</a>, <a href="https://publications.waset.org/search?q=USAC" title=" USAC"> USAC</a> </p> <a href="https://publications.waset.org/16076/orchestrapercussion-classification-algorithm-for-united-speech-audio-coding-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/16076/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/16076/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/16076/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/16076/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/16076/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/16076/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/16076/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/16076/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/16076/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/16076/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/16076.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1673</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8687</span> Improved Closed Set Text-Independent Speaker Identification by Combining MFCC with Evidence from Flipped Filter Banks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sandipan%20Chakroborty">Sandipan Chakroborty</a>, <a href="https://publications.waset.org/search?q=Anindya%20Roy"> Anindya Roy</a>, <a href="https://publications.waset.org/search?q=Goutam%20Saha"> Goutam Saha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>A state of the art Speaker Identification (SI) 
system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC) modeled on the human auditory system has been used as a standard acoustic feature set for SI applications. However, due to the structure of its filter bank, it captures vocal tract characteristics more effectively in the lower frequency regions. This paper proposes a new set of features using a complementary filter bank structure which improves distinguishability of speaker specific cues present in the higher frequency zone. Unlike high level features that are difficult to extract, the proposed feature set involves little computational burden during the extraction process. When combined with MFCC via a parallel implementation of speaker models, the proposed feature set outperforms baseline MFCC significantly. This proposition is validated by experiments conducted on two different kinds of public databases namely YOHO (microphone speech) and POLYCOST (telephone speech) with Gaussian Mixture Models (GMM) as a Classifier for various model orders.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Complementary%20Information" title="Complementary Information">Complementary Information</a>, <a href="https://publications.waset.org/search?q=Filter%20Bank" title=" Filter Bank"> Filter Bank</a>, <a href="https://publications.waset.org/search?q=GMM" title=" GMM"> GMM</a>, <a href="https://publications.waset.org/search?q=IMFCC" title=" IMFCC"> IMFCC</a>, <a href="https://publications.waset.org/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/search?q=Speaker%20Identification" title=" Speaker Identification"> Speaker Identification</a>, <a href="https://publications.waset.org/search?q=Speaker%20Recognition." 
title=" Speaker Recognition."> Speaker Recognition.</a> </p> <a href="https://publications.waset.org/3580/improved-closed-set-text-independent-speaker-identification-by-combining-mfcc-with-evidence-from-flipped-filter-banks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3580/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3580/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3580/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3580/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3580/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3580/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3580/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3580/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3580/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3580/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2295</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8686</span> Practical Method for Digital Music Matching Robust to Various Sound Qualities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Bokyung%20Sung">Bokyung Sung</a>, <a href="https://publications.waset.org/search?q=Jungsoo%20Kim"> Jungsoo Kim</a>, <a href="https://publications.waset.org/search?q=Jinman%20Kwun"> Jinman Kwun</a>, <a href="https://publications.waset.org/search?q=Junhyung%20Park"> Junhyung Park</a>, <a href="https://publications.waset.org/search?q=Jihye%20Ryeo"> Jihye Ryeo</a>, <a href="https://publications.waset.org/search?q=Ilju%20Ko"> Ilju Ko</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we propose a practical digital music matching system that is robust to variation in sound qualities. The proposed system is subdivided into two parts: client and server. The client part consists of the input, preprocessing and feature extraction modules. The preprocessing module, including the music onset module, revises the value gap occurring on the time axis between identical songs of different formats. The proposed method uses delta-grouped Mel frequency cepstral coefficients (MFCCs) to extract music features that are robust to changes in sound quality. According to the number of sound quality formats (SQFs) used, a music server is constructed with a feature database (FD) that contains different sub feature databases (SFDs). When the proposed system receives a music file, the selection module selects an appropriate SFD from a feature database; the selected SFD is subsequently used by the matching module. 
In this study, we used 3,000 queries for matching experiments in three cases with different FDs. In each case, we used 1,000 queries constructed by mixing 8 SQFs and 125 songs. The success rate of music matching improved from 88.6% when using a single SFD to 93.2% when using quadruple SFDs. By this experiment, we proved that the proposed method is robust to various sound qualities.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Digital%20Music" title="Digital Music">Digital Music</a>, <a href="https://publications.waset.org/search?q=Music%20Matching" title=" Music Matching"> Music Matching</a>, <a href="https://publications.waset.org/search?q=Variation%20in%20Sound%20Qualities" title=" Variation in Sound Qualities"> Variation in Sound Qualities</a>, <a href="https://publications.waset.org/search?q=Robust%20Matching%20method." title=" Robust Matching method."> Robust Matching method.</a> </p> <a href="https://publications.waset.org/9344/practical-method-for-digital-music-matching-robust-to-various-sound-qualities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9344/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9344/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9344/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9344/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9344/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9344/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9344/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9344/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9344/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9344/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9344.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1370</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8685</span> Applications of Support Vector Machines on Smart Phone Systems for Emotional Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Wernhuar%20Tarng">Wernhuar Tarng</a>, <a href="https://publications.waset.org/search?q=Yuan-Yuan%20Chen"> Yuan-Yuan Chen</a>, <a href="https://publications.waset.org/search?q=Chien-Lung%20Li"> Chien-Lung Li</a>, <a href="https://publications.waset.org/search?q=Kun-Rong%20Hsie"> Kun-Rong Hsie</a>, <a href="https://publications.waset.org/search?q=Mingteh%20Chen"> Mingteh Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An emotional speech recognition system for the applications on smart phones was proposed in this study to combine with 3G mobile 
communications and social networks to provide users and their groups with more interaction and care. This study developed a mechanism using the support vector machines (SVM) to recognize the emotions of speech such as happiness, anger, sadness and normal. The mechanism uses a hierarchical classifier to adjust the weights of acoustic features and divides various parameters into the categories of energy and frequency for training. In this study, 28 commonly used acoustic features including pitch and volume were proposed for training. In addition, a time-frequency parameter obtained by continuous wavelet transforms was also used to identify the accent and intonation in a sentence during the recognition process. The Berlin Database of Emotional Speech was used by dividing the speech into male and female data sets for training. According to the experimental results, the accuracies of male and female test sets were increased by 4.6% and 5.2% respectively after using the time-frequency parameter for classifying happy and angry emotions. For the classification of all emotions, the average accuracy, including male and female data, was 63.5% for the test set and 90.9% for the whole data set. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Smart%20phones" title="Smart phones">Smart phones</a>, <a href="https://publications.waset.org/search?q=emotional%20speech%20recognition" title=" emotional speech recognition"> emotional speech recognition</a>, <a href="https://publications.waset.org/search?q=socialnetworks" title=" socialnetworks"> socialnetworks</a>, <a href="https://publications.waset.org/search?q=support%20vector%20machines" title=" support vector machines"> support vector machines</a>, <a href="https://publications.waset.org/search?q=time-frequency%20parameter" title=" time-frequency parameter"> time-frequency parameter</a>, <a href="https://publications.waset.org/search?q=Mel-scale%20frequency%20cepstral%20coefficients%20%28MFCC%29." 
title="Mel-scale frequency cepstral coefficients (MFCC).">Mel-scale frequency cepstral coefficients (MFCC).</a> </p> <a href="https://publications.waset.org/9314/applications-of-support-vector-machines-on-smart-phone-systems-for-emotional-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9314/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9314/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9314/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9314/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9314/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9314/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9314/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9314/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9314/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9314/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9314.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1842</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8684</span> Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Suman%20Senapati">Suman Senapati</a>, <a href="https://publications.waset.org/search?q=Goutam%20Saha"> Goutam Saha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Real world Speaker Identification (SI) application differs from ideal or laboratory conditions causing perturbations that leads to a mismatch between the training and testing environment and degrade the performance drastically. Many strategies have been adopted to cope with acoustical degradation; wavelet based Bayesian marginal model is one of them. But Bayesian marginal models cannot model the inter-scale statistical dependencies of different wavelet scales. Simple nonlinear estimators for wavelet based denoising assume that the wavelet coefficients in different scales are independent in nature. However wavelet coefficients have significant inter-scale dependency. This paper enhances this inter-scale dependency property by a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in Log Gabor Wavelet (LGW) domain and corresponding joint shrinkage estimator is derived by Maximum a Posteriori (MAP) estimator. A framework is proposed based on these to denoise speech signal for automatic speaker identification problems. 
The robustness of the proposed framework is tested for Text Independent Speaker Identification application on 100 speakers of POLYCOST and 100 speakers of YOHO speech database in three different noise environments. Experimental results show that the proposed estimator yields a higher improvement in identification accuracy compared to other estimators on popular Gaussian Mixture Model (GMM) based speaker model and Mel-Frequency Cepstral Coefficient (MFCC) features. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Speaker%20Identification" title="Speaker Identification">Speaker Identification</a>, <a href="https://publications.waset.org/search?q=Log%20Gabor%20Wavelet" title=" Log Gabor Wavelet"> Log Gabor Wavelet</a>, <a href="https://publications.waset.org/search?q=Bayesian%20Bivariate%20Estimator" title=" Bayesian Bivariate Estimator"> Bayesian Bivariate Estimator</a>, <a href="https://publications.waset.org/search?q=Circularly%20Symmetric%20Probability%0ADensity%20Function" title=" Circularly Symmetric Probability Density Function"> Circularly Symmetric Probability Density Function</a>, <a href="https://publications.waset.org/search?q=SIRP." title=" SIRP."> SIRP.</a> </p> <a href="https://publications.waset.org/11816/speaker-identification-by-joint-statistical-characterization-in-the-log-gabor-wavelet-domain" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/11816/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/11816/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/11816/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/11816/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/11816/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/11816/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/11816/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/11816/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/11816/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/11816/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/11816.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1651</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8683</span> Application of Subversion Analysis in the Search for the Causes of Cracking in a Marine Engine Injector Nozzle</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Leszek%20Chybowski">Leszek Chybowski</a>, <a href="https://publications.waset.org/search?q=Artur%20Bejger"> Artur Bejger</a>, <a href="https://publications.waset.org/search?q=Katarzyna%20Gawdzi%C5%84ska"> Katarzyna Gawdzińska</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> <p>Subversion analysis is a tool used in the TRIZ (Theory of Inventive Problem Solving) methodology. This article introduces the history and describes the process of subversion analysis, as well as function analysis and analysis of the resources, used at the design stage when generating possible undesirable situations. The article charts the course of subversion analysis when applied to a fuel injection nozzle of a marine engine. The work describes the fuel injector nozzle as a technological system and presents principles of analysis for the causes of a cracked tip of the nozzle body. The system is modelled with functional analysis. A search for potential causes of the damage is undertaken and a cause-and-effect analysis for various hypotheses concerning the damage is drawn up. The importance of particular hypotheses is evaluated and the most likely causes of damage identified.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Complex%20technical%20system" title="Complex technical system">Complex technical system</a>, <a href="https://publications.waset.org/search?q=fuel%20injector" title=" fuel injector"> fuel injector</a>, <a href="https://publications.waset.org/search?q=function%20analysis" title=" function analysis"> function analysis</a>, <a href="https://publications.waset.org/search?q=importance%20analysis" title=" importance analysis"> importance analysis</a>, <a href="https://publications.waset.org/search?q=resource%20analysis" title=" resource analysis"> resource analysis</a>, <a href="https://publications.waset.org/search?q=sabotage%20analysis" title=" sabotage analysis"> sabotage analysis</a>, <a href="https://publications.waset.org/search?q=subversion%20analysis" title=" subversion analysis"> subversion analysis</a>, <a href="https://publications.waset.org/search?q=TRIZ." 
title=" TRIZ."> TRIZ.</a> </p> <a href="https://publications.waset.org/10008702/application-of-subversion-analysis-in-the-search-for-the-causes-of-cracking-in-a-marine-engine-injector-nozzle" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10008702/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10008702/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10008702/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10008702/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10008702/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10008702/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10008702/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10008702/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10008702/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10008702/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10008702.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1189</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8682</span> Biomechanics Analysis When Delivering Baby</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Kristyanto%20B.">Kristyanto B.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Plenty of analyses based on Biomechanics were carried out on many jobs in manufactures or services. Now Biomechanics analysis is being applied on mothers who are giving birth. The analysis conducted in terms of normal condition of the birth process without Gyn Bed (Obstetric Bed). The aim of analysis is to study whether it is risky or not when choosing the position of mother’s postures when delivering the baby. This investigation was applied on two positions that generally appear in common birth process. Results will show the analysis of both positions to support the birth process based on the Biomechanics analysis (Ergonomic approaches). </p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Biomechanics%20analysis" title="Biomechanics analysis">Biomechanics analysis</a>, <a href="https://publications.waset.org/search?q=Birth%20process" title=" Birth process"> Birth process</a>, <a href="https://publications.waset.org/search?q=Position%20of%20postures%20analysis" title=" Position of postures analysis"> Position of postures analysis</a>, <a href="https://publications.waset.org/search?q=Ergonomic%20approaches." 
title=" Ergonomic approaches."> Ergonomic approaches.</a> </p> <a href="https://publications.waset.org/17257/biomechanics-analysis-when-delivering-baby" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/17257/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/17257/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/17257/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/17257/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/17257/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/17257/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/17257/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/17257/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/17257/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/17257/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/17257.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2302</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8681</span> Joint Use of Factor Analysis (FA) and Data Envelopment Analysis (DEA) for Ranking of Data Envelopment Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Reza%20Nadimi">Reza Nadimi</a>, <a href="https://publications.waset.org/search?q=Fariborz%20Jolai"> Fariborz Jolai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article combines two techniques: data envelopment analysis (DEA) and Factor analysis (FA) to data reduction in decision making units (DMU). Data envelopment analysis (DEA), a popular linear programming technique is useful to rate comparatively operational efficiency of decision making units (DMU) based on their deterministic (not necessarily stochastic) input–output data and factor analysis techniques, have been proposed as data reduction and classification technique, which can be applied in data envelopment analysis (DEA) technique for reduction input – output data. Numerical results reveal that the new approach shows a good consistency in ranking with DEA. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Effectiveness" title="Effectiveness">Effectiveness</a>, <a href="https://publications.waset.org/search?q=Decision%20Making" title=" Decision Making"> Decision Making</a>, <a href="https://publications.waset.org/search?q=Data%20EnvelopmentAnalysis" title=" Data EnvelopmentAnalysis"> Data EnvelopmentAnalysis</a>, <a href="https://publications.waset.org/search?q=Factor%20Analysis" title=" Factor Analysis"> Factor Analysis</a> </p> <a href="https://publications.waset.org/13961/joint-use-of-factor-analysis-fa-and-data-envelopment-analysis-dea-for-ranking-of-data-envelopment-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13961/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13961/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13961/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13961/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13961/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13961/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13961/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13961/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13961/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13961/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13961.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2425</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=10">10</a></li> <li 
class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=290">290</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=291">291</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=cepstral%20analysis&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row 
m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>