<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: speaker verification</title> <meta name="description" content="Search results for: speaker verification"> <meta name="keywords" content="speaker verification"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="speaker verification" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="speaker verification"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 711</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: speaker verification</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">711</span> Effect of Clinical Depression on Automatic Speaker Verification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sheeraz%20Memon">Sheeraz Memon</a>, <a href="https://publications.waset.org/abstracts/search?q=Namunu%20C.%20Maddage"> Namunu C. Maddage</a>, <a href="https://publications.waset.org/abstracts/search?q=Margaret%20Lech"> Margaret Lech</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicholas%20Allen"> Nicholas Allen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The effect of a clinical environment on the accuracy of the speaker verification was tested. 
The speaker verification tests were performed within homogeneous environments containing clinically depressed speakers only, and non-depressed speakers only, as well as within mixed environments containing different mixtures of both clinically depressed and non-depressed speakers. The speaker verification framework included MFCC features and the GMM modeling and classification method. The speaker verification experiments within homogeneous environments showed a 5.1% increase of the EER within the clinically depressed environment when compared to the non-depressed environment. This indicated that clinical depression increases the intra-speaker variability and makes the speaker verification task more challenging. Experiments with mixed environments indicated that increasing the percentage of depressed individuals within a mixed environment increases the speaker verification equal error rates. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20verification" title="speaker verification">speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=GMM" title=" GMM"> GMM</a>, <a href="https://publications.waset.org/abstracts/search?q=EM" title=" EM"> EM</a>, <a href="https://publications.waset.org/abstracts/search?q=clinical%20environment" title=" clinical environment"> clinical environment</a>, <a href="https://publications.waset.org/abstracts/search?q=clinical%20depression" title=" clinical depression"> clinical depression</a> </p> <a href="https://publications.waset.org/abstracts/39436/effect-of-clinical-depression-on-automatic-speaker-verification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39436.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card
paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">710</span> Developed Text-Independent Speaker Verification System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Arif">Mohammed Arif</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdessalam%20Kifouche"> Abdessalam Kifouche</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech is a very convenient way of communication between people and machines. It conveys information about the identity of the talker. Since speaker recognition technology is increasingly securing our everyday lives, the objective of this paper is to develop two automatic text-independent speaker verification (TI SV) systems using low-level spectral features and machine learning methods: (i) the first system is based on a support vector machine (SVM), widely used in voice signal processing for speaker recognition, i.e., verifying the identity of the speaker based on voice characteristics; (ii) the second is based on a Gaussian Mixture Model (GMM) with a Universal Background Model (UBM), which combines statistics from different sources to support the SVM-based system.
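As background for the GMM-UBM approach mentioned in the abstract above, verification typically reduces to an average log-likelihood ratio between a speaker-specific GMM and a universal background model, computed over the MFCC frames of a test utterance. The following is a minimal illustrative sketch, not the authors' implementation; the diagonal-covariance models are an assumption.

```python
import numpy as np

def gmm_logpdf(X, weights, means, variances):
    """Per-frame log-likelihood under a diagonal-covariance GMM.

    X: (n_frames, dim); weights: (k,); means, variances: (k, dim).
    """
    diff = X[:, None, :] - means[None, :, :]                           # (n, k, d)
    exponent = -0.5 * np.sum(diff ** 2 / variances, axis=2)            # (n, k)
    log_norm = -0.5 * np.sum(np.log(2.0 * np.pi * variances), axis=1)  # (k,)
    log_comp = np.log(weights) + log_norm + exponent                   # (n, k)
    m = log_comp.max(axis=1, keepdims=True)
    return m[:, 0] + np.log(np.sum(np.exp(log_comp - m), axis=1))      # log-sum-exp

def llr_score(X, speaker_model, ubm):
    # Accept the claimed identity if this average log-likelihood ratio
    # exceeds a threshold tuned on development data.
    return float(np.mean(gmm_logpdf(X, *speaker_model) - gmm_logpdf(X, *ubm)))
```

The equal error rate reported by such systems is the operating point of this threshold at which false acceptances equal false rejections.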
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20verification" title="speaker verification">speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=text-independent" title=" text-independent"> text-independent</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=cepstral%20analysis" title=" cepstral analysis"> cepstral analysis</a> </p> <a href="https://publications.waset.org/abstracts/183493/developed-text-independent-speaker-verification-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">58</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">709</span> Modified Form of Margin Based Angular Softmax Loss for Speaker Verification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jamshaid%20ul%20Rahman">Jamshaid ul Rahman</a>, <a href="https://publications.waset.org/abstracts/search?q=Akhter%20Ali"> Akhter Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Adnan%20Manzoor"> Adnan Manzoor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Learning-based systems have received increasing interest in recent years; recognition structures, including end-to-end speaker recognition, are one of the hot topics in this area.
A well-known work on end-to-end speaker verification using the Angular Softmax Loss gained significant importance and is considered useful for directly training a discriminative model instead of the traditionally adopted i-vector approach. The margin-based strategy in angular softmax is beneficial for learning discriminative speaker embeddings, but the random selection of margin values is a big issue in both the additive and the multiplicative angular margin. As a better solution to this matter, we present an alternative approach by introducing a similar form of an additive parameter originally introduced for face recognition; it can adjust automatically to the corresponding margin values and learns more discriminative features than the Softmax. Experiments are conducted on part of the Fisher dataset, where it is observed that using the additive parameter with angular softmax to train the front-end, with probabilistic linear discriminant analysis (PLDA) in the back-end, boosts the performance of the structure.
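For readers unfamiliar with margin-based angular softmax, the additive-angular-margin variant (introduced for face recognition and adapted to speaker embeddings) can be sketched as follows. This is an illustrative numpy version, not the paper's modified form; the scale s and margin m values are assumptions.

```python
import numpy as np

def additive_angular_margin_logits(embeddings, weights, labels, s=30.0, m=0.2):
    # Normalize embeddings and per-speaker weight vectors so each logit
    # is the cosine of the angle between embedding and speaker direction.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T                                        # (batch, n_speakers)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    # Add the margin m only to the angle of each sample's true speaker:
    # the target class becomes harder, forcing more discriminative embeddings.
    rows = np.arange(len(labels))
    logits = cos.copy()
    logits[rows, labels] = np.cos(theta[rows, labels] + m)
    return s * logits  # feed into a standard softmax cross-entropy loss
```

The margin lowers the target-class logit relative to plain cosine softmax, which is exactly the property the margin-selection problem discussed above is about.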
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=additive%20parameter" title="additive parameter">additive parameter</a>, <a href="https://publications.waset.org/abstracts/search?q=angular%20softmax" title=" angular softmax"> angular softmax</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20verification" title=" speaker verification"> speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=PLDA" title=" PLDA"> PLDA</a> </p> <a href="https://publications.waset.org/abstracts/152915/modified-form-of-margin-based-angular-softmax-loss-for-speaker-verification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">708</span> Forensic Speaker Verification in Noisy Environmental by Enhancing the Speech Signal Using ICA Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Kamil%20Hasan%20Al-Ali">Ahmed Kamil Hasan Al-Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Bouchra%20Senadji"> Bouchra Senadji</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganesh%20Naik"> Ganesh Naik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose a system robust to real environmental noise and channel mismatch for forensic speaker verification systems. This method is based on suppressing various types of real environmental noise by using the independent component analysis (ICA) algorithm.
The enhanced speech signal is applied to mel frequency cepstral coefficients (MFCC) or MFCC feature warping to extract the essential characteristics of the speech signal. Channel effects are reduced using an intermediate vector (i-vector) and probabilistic linear discriminant analysis (PLDA) approach for classification. The proposed algorithm is evaluated using an Australian forensic voice comparison database, combined with car, street and home noises from QUT-NOISE at a signal-to-noise ratio (SNR) ranging from -10 dB to 10 dB. Experimental results indicate that MFCC feature warping with ICA achieves reductions in equal error rate of about 48.22%, 44.66%, and 50.07% over MFCC feature warping alone when the test speech signals are corrupted with random sessions of street, car, and home noises at -10 dB SNR. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=noisy%20forensic%20speaker%20verification" title="noisy forensic speaker verification">noisy forensic speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=ICA%20algorithm" title=" ICA algorithm"> ICA algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC%20feature%20warping" title=" MFCC feature warping"> MFCC feature warping</a> </p> <a href="https://publications.waset.org/abstracts/66332/forensic-speaker-verification-in-noisy-environmental-by-enhancing-the-speech-signal-using-ica-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">408</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge
badge-info">707</span> A Two-Step Framework for Unsupervised Speaker Segmentation Using BIC and Artificial Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Alwosheel">Ahmad Alwosheel</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Alqaraawi"> Ahmed Alqaraawi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work proposes a new speaker segmentation approach for two speakers. It is an online approach that does not require prior information about speaker models. It has two phases: a conventional approach, such as unsupervised BIC-based segmentation, is utilized in the first phase to detect speaker changes and train a Neural Network, while in the second phase, the trained parameters output from the Neural Network are used to predict the next incoming audio stream. Using this approach, a comparable accuracy to similar BIC-based approaches is achieved, with a significant improvement in terms of computation time.
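The BIC-based change detection commonly used in this kind of first phase is the ΔBIC criterion of Chen and Gopalakrishnan: compare modelling an audio window with one full-covariance Gaussian against two Gaussians split at a candidate frame, penalized by the extra parameters. A rough sketch follows; the paper's exact variant and the penalty weight λ are assumptions.

```python
import numpy as np

def delta_bic(X, t, lam=1.0):
    """ΔBIC for a hypothesised speaker change at frame t within window X.

    X: (n_frames, dim) feature window. Positive values favour a change.
    """
    n, d = X.shape

    def logdet_cov(Z):
        # Small ridge keeps the covariance well-conditioned on short segments.
        cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(d)
        return np.linalg.slogdet(cov)[1]

    # Penalty for the extra mean vector and covariance matrix of a second model.
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
    gain = 0.5 * (n * logdet_cov(X)
                  - t * logdet_cov(X[:t])
                  - (n - t) * logdet_cov(X[t:]))
    return gain - penalty

# A change point is hypothesised where delta_bic is positive and maximal.
```

Scanning this statistic over candidate frames gives the speaker-change hypotheses that the abstract's second, neural-network phase then learns to predict more cheaply.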
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title="artificial neural network">artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=diarization" title=" diarization"> diarization</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20indexing" title=" speaker indexing"> speaker indexing</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20segmentation" title=" speaker segmentation"> speaker segmentation</a> </p> <a href="https://publications.waset.org/abstracts/27191/a-two-step-framework-for-unsupervised-speaker-segmentation-using-bic-and-artificial-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27191.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">502</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">706</span> Novel Formal Verification Based Coverage Augmentation Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Surinder%20Sood">Surinder Sood</a>, <a href="https://publications.waset.org/abstracts/search?q=Debajyoti%20Mukherjee"> Debajyoti Mukherjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Formal verification techniques have become widely popular in pre-silicon verification as an alternative to constrained-random simulation-based techniques. This paper proposes a novel formal-verification-based coverage augmentation technique to verify complex RTL functionality faster. The proposed approach relies on augmenting coverage analysis coming from simulation and formal verification.
Besides this, the functional qualification framework not only helps in improving the coverage at a faster pace but also aids in maturing and qualifying the formal verification infrastructure. The proposed technique has helped to achieve faster verification sign-off, resulting in faster time-to-market. The design picked had a complex control and data path and had many configurable options to meet multiple specification needs. The flow is generic and tool-independent, so it can be leveraged across projects and designs with much less effort. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=COI%20%28cone%20of%20influence%29" title="COI (cone of influence)">COI (cone of influence)</a>, <a href="https://publications.waset.org/abstracts/search?q=coverage" title=" coverage"> coverage</a>, <a href="https://publications.waset.org/abstracts/search?q=formal%20verification" title=" formal verification"> formal verification</a>, <a href="https://publications.waset.org/abstracts/search?q=fault%20injection" title=" fault injection"> fault injection</a> </p> <a href="https://publications.waset.org/abstracts/159250/novel-formal-verification-based-coverage-augmentation-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159250.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">124</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">705</span> Speaker Recognition Using LIRA Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nestor%20A.%20Garcia%20Fragoso">Nestor A.
Garcia Fragoso</a>, <a href="https://publications.waset.org/abstracts/search?q=Tetyana%20Baydyk"> Tetyana Baydyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernst%20Kussul"> Ernst Kussul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article contains information from our investigation in the field of voice recognition. For this purpose, we created a voice database that contains different phrases in two languages, English and Spanish, for men and women. As the classifier, the LIRA (Limited Receptive Area) grayscale neural classifier was selected. The LIRA grayscale neural classifier was developed for image recognition tasks and demonstrated good results. Therefore, we decided to develop a recognition system using this classifier for voice recognition. From a specific set of speakers, the system can recognize the speaker’s voice. For this purpose, it uses spectrograms of the voice signals as input, extracts their characteristics, and identifies the speaker. The results are described and analyzed in this article. The classifier can be used for speaker identification in security systems or in smart buildings for different types of intelligent devices.
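The spectrogram input mentioned above is produced with a short-time Fourier transform. A minimal sketch of the kind of 2-D grayscale "image" an image classifier such as LIRA could consume; the frame length, hop size, and sample rate are illustrative assumptions, not values from the article.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Hann-windowed magnitude spectrogram: (n_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# A 440 Hz tone sampled at 8 kHz should peak near bin 440 / 8000 * 256 ≈ 14.
```

Each row is one analysis frame and each column one frequency bin, so the array can be rescaled to a fixed-size grayscale image before classification.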
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=extreme%20learning" title="extreme learning">extreme learning</a>, <a href="https://publications.waset.org/abstracts/search?q=LIRA%20neural%20classifier" title=" LIRA neural classifier"> LIRA neural classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title=" speaker identification"> speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20recognition" title=" voice recognition"> voice recognition</a> </p> <a href="https://publications.waset.org/abstracts/112384/speaker-recognition-using-lira-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">704</span> The Effect of The Speaker's Speaking Style as A Factor of Understanding and Comfort of The Listener</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Made%20Rahayu%20Putri%20Saron">Made Rahayu Putri Saron</a>, <a href="https://publications.waset.org/abstracts/search?q=Mochamad%20Nizar%20Palefi%20Ma%E2%80%99ady"> Mochamad Nizar Palefi Ma’ady</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Communication skills are important in everyday life; communication can be done verbally, in oral or written form, and nonverbally, through expressions or body movements. Good communication should provide information clearly, with feedback between the speaker and listener.
However, it is often found that the information conveyed is not clear and that there is no feedback from the listeners, so it cannot be ensured that the communication is effective and understandable. The speaker's understanding of the topic is one of the supporting factors for the listener to be able to grasp the meaning of the conversation. However, based on the results of the literature review, it was found that the factors influencing a person's speaking style are as follows: (i) environmental conditions; (ii) voice, articulation, and accent; (iii) gender; (iv) personality; and (v) speech disorders (dysarthria). It can be concluded that the factors supporting the listener's understanding and comfort depend on the nature of the speaker (environmental conditions, voice, gender, personality) and on whether the speaker has a speech disorder. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=listener" title="listener">listener</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20speaking" title=" public speaking"> public speaking</a>, <a href="https://publications.waset.org/abstracts/search?q=speaking%20style" title=" speaking style"> speaking style</a>, <a href="https://publications.waset.org/abstracts/search?q=understanding" title=" understanding"> understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=and%20comfortable%20factor" title=" and comfortable factor"> and comfortable factor</a> </p> <a href="https://publications.waset.org/abstracts/145442/the-effect-of-the-speakers-speaking-style-as-a-factor-of-understanding-and-comfort-of-the-listener" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145442.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge
badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">703</span> Multi-Modal Feature Fusion Network for Speaker Recognition Task</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiang%20Shijie">Xiang Shijie</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhou%20Dong"> Zhou Dong</a>, <a href="https://publications.waset.org/abstracts/search?q=Tian%20Dan"> Tian Dan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speaker recognition is a crucial task in the field of speech processing, aimed at identifying individuals based on their vocal characteristics. However, existing speaker recognition methods face numerous challenges. Traditional methods primarily rely on audio signals, which often suffer from limitations in noisy environments, variations in speaking style, and insufficient sample sizes. Additionally, relying solely on audio features can sometimes fail to capture the unique identity of the speaker comprehensively, impacting recognition accuracy. To address these issues, we propose a multi-modal network architecture that simultaneously processes both audio and text signals. By gradually integrating audio and text features, we leverage the strengths of both modalities to enhance the robustness and accuracy of speaker recognition. Our experiments demonstrate significant improvements with this multi-modal approach, particularly in complex environments, where recognition performance has been notably enhanced. Our research not only highlights the limitations of current speaker recognition methods but also showcases the effectiveness of multi-modal fusion techniques in overcoming these limitations, providing valuable insights for future research. 
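The abstract above does not publish the network's exact fusion mechanism; one common way to "gradually integrate" audio and text features is a learned per-dimension gate over the two embeddings. The sketch below is purely illustrative, with hypothetical shapes for the gate parameters.

```python
import numpy as np

def gated_fusion(audio_emb, text_emb, W_gate, b_gate):
    """Fuse same-sized audio and text embeddings with a sigmoid gate.

    audio_emb, text_emb: (d,); W_gate: (2d, d); b_gate: (d,).
    The gate decides, per dimension, how much of the audio vs. text
    embedding enters the joint speaker representation.
    """
    z = np.concatenate([audio_emb, text_emb], axis=-1)
    g = 1.0 / (1.0 + np.exp(-(z @ W_gate + b_gate)))  # gate values in (0, 1)
    return g * audio_emb + (1.0 - g) * text_emb
```

In a trained system the gate would learn to lean on the text modality when the audio is noisy, which matches the robustness claim made in the abstract.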
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title="feature fusion">feature fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=memory%20network" title=" memory network"> memory network</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20input" title=" multimodal input"> multimodal input</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a> </p> <a href="https://publications.waset.org/abstracts/191527/multi-modal-feature-fusion-network-for-speaker-recognition-task" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/191527.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">32</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">702</span> Experimental Study on the Heat Transfer Characteristics of the 200W Class Woofer Speaker</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyung-Jin%20Kim">Hyung-Jin Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Dae-Wan%20Kim"> Dae-Wan Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Moo-Yeon%20Lee"> Moo-Yeon Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this study is to experimentally investigate the heat transfer characteristics of 200 W class woofer speaker units with the input voice signals. The temperature and heat transfer characteristics of the 200 W class woofer speaker unit were experimentally tested with the several input voice signals such as 1500 Hz, 2500 Hz, and 5000 Hz respectively. 
From the experiments, it can be observed that the temperature of the woofer speaker unit, including the voice-coil part, increases with a decrease in the input voice signal frequency. Also, the temperature difference between measured points of the voice coil increases as the input voice signal frequency decreases. In addition, the heat transfer of the woofer speaker with the 1500 Hz input voice signal is 40% higher than that with the 5000 Hz input voice signal at the measuring time of 200 seconds. It can be concluded from the experiments that the voice-coil temperature initially increases rapidly with time and, after a certain period of time, increases exponentially. Also, during this time-dependent temperature change, it can be observed that the high-frequency voice signal is more stable than the low-frequency one. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heat%20transfer" title="heat transfer">heat transfer</a>, <a href="https://publications.waset.org/abstracts/search?q=temperature" title=" temperature"> temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20coil" title=" voice coil"> voice coil</a>, <a href="https://publications.waset.org/abstracts/search?q=woofer%20speaker" title=" woofer speaker"> woofer speaker</a> </p> <a href="https://publications.waset.org/abstracts/5142/experimental-study-on-the-heat-transfer-characteristics-of-the-200w-class-woofer-speaker" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">360</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">701</span> USE-Net: SE-Block Enhanced U-Net Architecture for Robust Speaker
Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kilari%20Nikhil">Kilari Nikhil</a>, <a href="https://publications.waset.org/abstracts/search?q=Ankur%20Tibrewal"> Ankur Tibrewal</a>, <a href="https://publications.waset.org/abstracts/search?q=Srinivas%20Kruthiventi%20S.%20S."> Srinivas Kruthiventi S. S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Conventional speaker identification systems often fall short of capturing the diverse variations present in speech data due to fixed-scale architectures. In this research, we propose a CNN-based architecture, USENet, designed to overcome these limitations. Leveraging two key techniques, our approach achieves superior performance on the VoxCeleb 1 Dataset without any pre-training. Firstly, we adopt a U-net-inspired design to extract features at multiple scales, empowering our model to capture speech characteristics effectively. Secondly, we introduce the squeeze and excitation block to enhance spatial feature learning. The proposed architecture showcases significant advancements in speaker identification, outperforming existing methods, and holds promise for future research in this domain. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20feature%20extraction" title="multi-scale feature extraction">multi-scale feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=squeeze%20and%20excitation" title=" squeeze and excitation"> squeeze and excitation</a>, <a href="https://publications.waset.org/abstracts/search?q=VoxCeleb1%20speaker%20identification" title=" VoxCeleb1 speaker identification"> VoxCeleb1 speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=mel-spectrograms" title=" mel-spectrograms"> mel-spectrograms</a>, <a href="https://publications.waset.org/abstracts/search?q=USENet" title=" USENet"> USENet</a> </p> <a href="https://publications.waset.org/abstracts/170441/use-net-se-block-enhanced-u-net-architecture-for-robust-speaker-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170441.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">700</span> Formal Verification of Cache System Using a Novel Cache Memory Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guowei%20Hou">Guowei Hou</a>, <a href="https://publications.waset.org/abstracts/search?q=Lixin%20Yu"> Lixin Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei%20Zhuang"> Wei Zhuang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hui%20Qin"> Hui Qin</a>, <a href="https://publications.waset.org/abstracts/search?q=Xue%20Yang"> Xue Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Formal verification is 
proposed to ensure the correctness of a design and to make functional verification more efficient. Since the cache plays a vital role in the design of a System on Chip (SoC), and a cache with a Memory Management Unit (MMU) and cache memory unit makes the state space too large to verify by simulation, a formal verification approach is presented for such system designs. In this paper, a formal model checking verification flow is suggested, and a new cache memory model, called the “exhaustive search model”, is proposed. Instead of using a large RAM to represent the whole cache memory, the exhaustive search model employs just two cache blocks. For a cache system containing a data cache (Dcache) and an instruction cache (Icache), the Dcache and Icache memory models are established separately using the same mechanism. Finally, the novel model is applied to the verification of a cache module of a custom-built SoC that has been used in practice. The results show that the cache system is verified correctly using the exhaustive search model and that the verification becomes much more manageable and flexible. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cache%20system" title="cache system">cache system</a>, <a href="https://publications.waset.org/abstracts/search?q=formal%20verification" title=" formal verification"> formal verification</a>, <a href="https://publications.waset.org/abstracts/search?q=novel%20model" title=" novel model"> novel model</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20on%20chip%20%28SoC%29" title=" system on chip (SoC)"> system on chip (SoC)</a> </p> <a href="https://publications.waset.org/abstracts/26581/formal-verification-of-cache-system-using-a-novel-cache-memory-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26581.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">496</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">699</span> Functional and Stimuli Implementation and Verification of Programmable Peripheral Interface (PPI) Protocol</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20N.%20Joshi">N. N. Joshi</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20K.%20Singh"> G. K. Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present the stimuli implementation and verification of a Programmable Peripheral Interface (PPI) 8255. It involves the design and verification of a configurable intellectual property (IP) module of the PPI protocol, using Verilog HDL for the implementation and SystemVerilog for the verification. 
An overview of the PPI-8255 is presented, followed by the design specification implemented for this work according to the functional description and pin configuration of the PPI-8255. The coverage report, generated by Questa Sim 10.0b, shows that our design and verification environment covered 100% of the functionality in accordance with the design specification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Programmable%20Peripheral%20Interface%20%28PPI%29" title="Programmable Peripheral Interface (PPI)">Programmable Peripheral Interface (PPI)</a>, <a href="https://publications.waset.org/abstracts/search?q=verilog%20HDL" title=" verilog HDL"> verilog HDL</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20verilog" title=" system verilog"> system verilog</a>, <a href="https://publications.waset.org/abstracts/search?q=questa%20sim" title=" questa sim "> questa sim </a> </p> <a href="https://publications.waset.org/abstracts/21194/functional-and-stimuli-implementation-and-verification-of-programmable-peripheral-interface-ppi-protocol" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21194.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">522</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">698</span> Signature Verification System for a Banking Business Process Management</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Rahaf">A. Rahaf</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Liyakathunsia"> S. 
Liyakathunsia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In today’s world, banks face unprecedented operational pressure that tests the efficiency, effectiveness, and agility of their business processes. In a typical banking process, a person’s authorization is usually based on his or her signature, which appears on most transactions. Signature verification is therefore considered one of the most significant pieces of information needed for bank document processing, and banks usually use it to authenticate the identity of individuals. In this paper, a business process model is proposed in order to increase the quality of the verification process and to reduce the time and resources needed. In order to understand the current process, a survey was conducted and distributed among bank employees. After analyzing the survey, a process model was created using the Bizagi modeler, which helps in simulating the process after assigning its time and cost. The outcomes show that automating the signature verification process is highly recommended for a banking business process. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=business%20process%20management" title="business process management">business process management</a>, <a href="https://publications.waset.org/abstracts/search?q=process%20modeling" title=" process modeling"> process modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=quality" title=" quality"> quality</a>, <a href="https://publications.waset.org/abstracts/search?q=Signature%20Verification" title=" Signature Verification"> Signature Verification</a> </p> <a href="https://publications.waset.org/abstracts/67664/signature-verification-system-for-a-banking-business-process-management" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67664.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">425</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">697</span> An Encapsulation of a Navigable Tree Position: Theory, Specification, and Verification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nicodemus%20M.%20J.%20Mbwambo">Nicodemus M. J. Mbwambo</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu-Shan%20Sun"> Yu-Shan Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Murali%20Sitaraman"> Murali Sitaraman</a>, <a href="https://publications.waset.org/abstracts/search?q=Joan%20Krone"> Joan Krone</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a generic data abstraction that captures a navigable tree position. The mathematical modeling of the abstraction encapsulates the current tree position, which can be used to navigate and modify the tree. 
The encapsulation of the tree position in the data abstraction specification avoids the use of explicit references and aliasing, thereby simplifying verification of (imperative) client code that uses the data abstraction. To ease the tasks of such specification and verification, a general tree theory, rich with mathematical notations and results, has been developed. The paper contains an example to illustrate automated verification ramifications. With sufficient tree theory development, automated proving seems plausible even in the absence of a special-purpose tree solver. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automation" title="automation">automation</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20abstraction" title=" data abstraction"> data abstraction</a>, <a href="https://publications.waset.org/abstracts/search?q=maps" title=" maps"> maps</a>, <a href="https://publications.waset.org/abstracts/search?q=specification" title=" specification"> specification</a>, <a href="https://publications.waset.org/abstracts/search?q=tree" title=" tree"> tree</a>, <a href="https://publications.waset.org/abstracts/search?q=verification" title=" verification"> verification</a> </p> <a href="https://publications.waset.org/abstracts/131080/an-encapsulation-of-a-navigable-tree-position-theory-specification-and-verification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/131080.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">696</span> Pyramid Binary Pattern for Age Invariant Face Verification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Saroj%20Bijarnia">Saroj Bijarnia</a>, <a href="https://publications.waset.org/abstracts/search?q=Preety%20Singh"> Preety Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose a simple and effective biometrics system based on face verification across aging, using a new variant of texture feature, the Pyramid Binary Pattern. This employs the Local Binary Pattern along with its hierarchical information. Dimension reduction of the generated texture feature vector is done using Principal Component Analysis, and a Support Vector Machine is used for classification. Our proposed method achieves an accuracy of 92.24% and can be used in an automated age-invariant face verification system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometrics" title="biometrics">biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=age%20invariant" title=" age invariant"> age invariant</a>, <a href="https://publications.waset.org/abstracts/search?q=verification" title=" verification"> verification</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/64435/pyramid-binary-pattern-for-age-invariant-face-verification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64435.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">351</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">695</span> An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ben%20Soltane%20Cheima">Ben Soltane Cheima</a>, <a href="https://publications.waset.org/abstracts/search?q=Ittansa%20Yonas%20Kelbesa"> Ittansa Yonas Kelbesa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speaker Identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of the feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initializing the GMM, whose underlying parameters are estimated in the EM step, also improved the convergence rate and the system’s performance. The system additionally uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. 
Simulation results carried out on voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20modeling" title=" speaker modeling"> speaker modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20matching" title=" feature matching"> feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=Mel%20frequency%20cepstrum%20coefficient%20%28MFCC%29" title=" Mel frequency cepstrum coefficient (MFCC)"> Mel frequency cepstrum coefficient (MFCC)</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model%20%28GMM%29" title=" Gaussian mixture model (GMM)"> Gaussian mixture model (GMM)</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20quantization%20%28VQ%29" title=" vector quantization (VQ)"> vector quantization (VQ)</a>, <a href="https://publications.waset.org/abstracts/search?q=Linde-Buzo-Gray%20%28LBG%29" title=" Linde-Buzo-Gray (LBG)"> Linde-Buzo-Gray (LBG)</a>, <a href="https://publications.waset.org/abstracts/search?q=expectation%20maximization%20%28EM%29" title=" expectation maximization (EM)"> expectation maximization (EM)</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-processing" title=" pre-processing"> pre-processing</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20activity%20detection%20%28VAD%29" title=" voice activity detection (VAD)"> voice activity detection (VAD)</a>, <a href="https://publications.waset.org/abstracts/search?q=short%20time%20energy%20%28STE%29" title=" short time energy (STE)"> short time energy (STE)</a>, <a 
href="https://publications.waset.org/abstracts/search?q=background%20noise%20statistical%20modeling" title=" background noise statistical modeling"> background noise statistical modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=closed-set%20tex-independent%20speaker%20identification%20system%20%28CISI%29" title=" closed-set tex-independent speaker identification system (CISI)"> closed-set tex-independent speaker identification system (CISI)</a> </p> <a href="https://publications.waset.org/abstracts/16253/an-intelligent-text-independent-speaker-identification-using-vq-gmm-model-based-multiple-classifier-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16253.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">309</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">694</span> Physical Verification Flow on Multiple Foundries</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rohaya%20Abdul%20Wahab">Rohaya Abdul Wahab</a>, <a href="https://publications.waset.org/abstracts/search?q=Raja%20Mohd%20Fuad%20Tengku%20Aziz"> Raja Mohd Fuad Tengku Aziz</a>, <a href="https://publications.waset.org/abstracts/search?q=Nazaliza%20Othman"> Nazaliza Othman</a>, <a href="https://publications.waset.org/abstracts/search?q=Sharifah%20Saleh"> Sharifah Saleh</a>, <a href="https://publications.waset.org/abstracts/search?q=Nabihah%20Razali"> Nabihah Razali</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Al%20Baqir%20Zinal%20Abidin"> Muhammad Al Baqir Zinal Abidin</a>, <a href="https://publications.waset.org/abstracts/search?q=Md%20Hanif%20Md%20Nasir"> Md Hanif Md Nasir </a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> This paper discusses how we optimize the physical verification flow in our IC Design Department, which works with various rule decks from multiple foundries. Our ultimate goal is to achieve a faster time to tape-out and avoid schedule delays. Currently, physical verification runtimes and memory usage have increased drastically with the growing number of design rules, design complexity, and the size of the chips to be verified. To manage design violations, we use a number of solutions to reduce the number of violations that physical verification engineers need to check. The most important functions in physical verification are DRC (design rule check), LVS (layout vs. schematic), and XRC (extraction). Since we tape out designs at multiple foundries, we need a flow that improves the overall turnaround time and the ease of use of the physical verification process. The demand for a fast turnaround time is all the more critical since physical design is the last stage before sending the layout to the foundries. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=physical%20verification" title="physical verification">physical verification</a>, <a href="https://publications.waset.org/abstracts/search?q=DRC" title=" DRC"> DRC</a>, <a href="https://publications.waset.org/abstracts/search?q=LVS" title=" LVS"> LVS</a>, <a href="https://publications.waset.org/abstracts/search?q=XRC" title=" XRC"> XRC</a>, <a href="https://publications.waset.org/abstracts/search?q=flow" title=" flow"> flow</a>, <a href="https://publications.waset.org/abstracts/search?q=foundry" title=" foundry"> foundry</a>, <a href="https://publications.waset.org/abstracts/search?q=runset" title=" runset"> runset</a> </p> <a href="https://publications.waset.org/abstracts/29033/physical-verification-flow-on-multiple-foundries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29033.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">654</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">693</span> A Reduced Distributed State Space for Modular Petri Nets</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sawsen%20Khlifa">Sawsen Khlifa</a>, <a href="https://publications.waset.org/abstracts/search?q=Chiheb%20AMeur%20Abid"> Chiheb AMeur Abid</a>, <a href="https://publications.waset.org/abstracts/search?q=Belhassan%20Zouari"> Belhassan Zouari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Modular verification approaches have been widely attempted to cope with the well-known state explosion problem. This paper deals with the modular verification of modular Petri nets. 
We propose a reduced version of the modular state space of a given modular Petri net. The new structure allows the creation of smaller modular graphs. Each graph captures the behavior of the corresponding module and outlines some global information. Hence, this version helps to overcome the explosion problem and to use less memory space. In this condensed structure, the verification of some generic properties concerning one module is limited to the exploration of its associated graph. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=distributed%20systems" title="distributed systems">distributed systems</a>, <a href="https://publications.waset.org/abstracts/search?q=modular%20verification" title=" modular verification"> modular verification</a>, <a href="https://publications.waset.org/abstracts/search?q=petri%20nets" title=" petri nets"> petri nets</a>, <a href="https://publications.waset.org/abstracts/search?q=state%20space%20explosition" title=" state space explosion"> state space explosion</a> </p> <a href="https://publications.waset.org/abstracts/148880/a-reduced-distributed-sate-space-for-modular-petri-nets" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148880.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">115</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">692</span> The Difference of Learning Outcomes in Reading Comprehension between Text and Film as The Media in Indonesian Language for Foreign Speaker in Intermediate Level</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siti%20Ayu%20Ningsih">Siti Ayu Ningsih</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> This study aims to find the difference in learning outcomes in reading comprehension between text and film as media in Indonesian Language for Foreign Speakers (BIPA) learning at the intermediate level. Using quantitative and qualitative research methods, the study draws on a single respondent from D'Royal Morocco Integrative Islamic School, in grade nine of the secondary level. The quantitative method is used to calculate the learning outcomes after the appropriate action cycle, whereas the qualitative method is used to interpret and describe the findings derived from the quantitative method. The techniques used in this study are observation and written tests. Based on the research, the use of text media is more effective than film for intermediate-level learners of Indonesian Language for Foreign Speakers. This is because, when using film, learners do not have enough time to note down difficult vocabulary or to look up its meaning in the dictionary. The use of text media is more effective because it requires no additional time to note down difficult words; for difficult or unfamiliar words, the learner can immediately find the meaning in the dictionary. The presence of the text is also very helpful for learners of Indonesian Language for Foreign Speakers in finding answers to the questions more easily, by matching the vocabulary of the questions to the text. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Indonesian%20language%20for%20foreign%20speaker" title="Indonesian language for foreign speaker">Indonesian language for foreign speaker</a>, <a href="https://publications.waset.org/abstracts/search?q=learning%20outcome" title=" learning outcome"> learning outcome</a>, <a href="https://publications.waset.org/abstracts/search?q=media" title=" media"> media</a>, <a href="https://publications.waset.org/abstracts/search?q=reading%20comprehension" title=" reading comprehension"> reading comprehension</a> </p> <a href="https://publications.waset.org/abstracts/82676/the-difference-of-learning-outcomes-in-reading-comprehension-between-text-and-film-as-the-media-in-indonesian-language-for-foreign-speaker-in-intermediate-level" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82676.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">197</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">691</span> Acoustic Analysis for Comparison and Identification of Normal and Disguised Speech of Individuals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Surbhi%20Mathur">Surbhi Mathur</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20M.%20Vyas"> J. M. Vyas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Although forensic speaker recognition technology has developed rapidly, there are still many problems to be solved. The biggest problem arises when cases involving disguised voice samples come up for examination and identification. 
Such voice samples from anonymous callers are frequently encountered in crimes involving kidnapping, blackmail, hoax extortion, and many more, where the speaker makes a deliberate effort to manipulate his or her natural voice in order to conceal identity for fear of being caught. Voice disguise causes serious damage to the natural vocal parameters of the speaker and thus complicates the process of identification. The sole objective of this doctoral project is to find out whether definite opinions can be rendered in cases involving disguised speech. It does so by experimentally determining the effects of different disguise forms on personal identification and the speaker recognition rate for various voice disguise techniques, such as raised pitch, lowered pitch, increased nasality, covering the mouth, constricting the vocal tract, and placing an obstacle in the mouth. The amount of phonetic and acoustic variation between disguised (artificial) and natural samples of an individual is analyzed and compared by both auditory and spectrographic analysis. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=forensic" title="forensic">forensic</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=voice" title=" voice"> voice</a>, <a href="https://publications.waset.org/abstracts/search?q=speech" title=" speech"> speech</a>, <a href="https://publications.waset.org/abstracts/search?q=disguise" title=" disguise"> disguise</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a> </p> <a href="https://publications.waset.org/abstracts/47439/acoustic-analysis-for-comparison-and-identification-of-normal-and-disguised-speech-of-individuals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47439.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">690</span> Comparative Methods for Speech Enhancement and the Effects on Text-Independent Speaker Identification Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Ajgou">R. Ajgou</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Sbaa"> S. Sbaa</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ghendir"> S. Ghendir</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Chemsa"> A. Chemsa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Taleb-Ahmed"> A. 
Taleb-Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech enhancement algorithms aim to improve speech quality. In this paper, we review several speech enhancement methods and evaluate their performance based on Perceptual Evaluation of Speech Quality (PESQ, ITU-T P.862) scores. All methods were evaluated in the presence of different kinds of noise using the TIMIT database and the NOIZEUS noisy speech corpus. The noise was taken from the AURORA database and includes suburban train, babble, car, exhibition hall, restaurant, street, airport and train station noise. Simulation results showed improved performance, in terms of the PESQ measure, for the tracking-of-non-stationary-noise approach in comparison with the other methods. Moreover, we evaluated the effects of the speech enhancement techniques on a speaker identification system based on an autoregressive (AR) model and Mel-frequency cepstral coefficients (MFCC). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20enhancement" title="speech enhancement">speech enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=pesq" title=" pesq"> pesq</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a> </p> <a href="https://publications.waset.org/abstracts/31102/comparative-methods-for-speech-enhancement-and-the-effects-on-text-independent-speaker-identification-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31102.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">424</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">689</span> Studying Second Language Learners' Language Behavior from Conversation Analysis Perspective</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yanyan%20Wang">Yanyan Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper on second language teaching and learning uses the conversation analysis (CA) approach and focuses on how second language learners of Chinese do repair when making clarification requests. In order to demonstrate their behavior in interaction, a comparison was made between native and non-native speakers of Chinese. The significance of the research is to make second language teachers and learners aware of repair and how to seek clarification. Utilizing the methodology of CA, the research involved two sets of naturally occurring recordings, one of native speaker students and the other of non-native speaker students. Both sets of recordings were telephone talks between students and teachers. There were 50 native speaker students and 50 non-native speaker students. After repeated listening to the recordings, the parts with repairs for clarification were selected for analysis; these included the moments in the talk when students had problems in understanding or hearing the speaker and had to seek clarification. For example, ‘Sorry, I do not understand’ and ‘Can you repeat the question?’ were used as repair to make clarification requests. In the data, there were 43 such cases from native speaker students and 88 cases from non-native speaker students. The non-native speaker students were more likely to use repair to seek clarification. Analysis of how the students made clarification requests during their conversations was carried out by investigating how the students initiated problems and how the teachers repaired the problems. 
In CA terms, this is called other-initiated self-repair (OISR), which refers to student-initiated teacher-repair in this research. The findings show that, in initiating repair, native speaker students pay more attention to mutual understanding (inter-subjectivity), while non-native speaker students, due to their lack of language proficiency, pay more attention to shifts in their status of knowledge (epistemic status). There are three major differences: (1) native Chinese students more often initiate closed-class OISR (seeking specific information in the request), such as repeating a word or phrase from the previous turn, while non-native students more frequently initiate open-class OISR (not specifying what needs clarification), such as ‘sorry, I don’t understand’; (2) native speakers’ clarification requests are treated by the teacher as concerning understanding of the content, while non-native learners’ clarification requests are treated by the teacher as a language proficiency problem; (3) native speakers do not treat repair as a knowledge issue, and there is no third-position turn in their repair sequences to close the repair, while non-native learners take the repair sequence as a time to adjust their knowledge, with a clear closing third-position token such as ‘oh’ to close the repair sequence so that the talk can return to the topic. In conclusion, this paper uses the conversation analysis approach to compare differences between native Chinese speakers and non-native Chinese learners in their ways of conducting repair when making clarification requests. The findings are useful in future Chinese language teaching and learning, especially in teaching pragmatics such as requests. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=conversation%20analysis%20%28CA%29" title="conversation analysis (CA)">conversation analysis (CA)</a>, <a href="https://publications.waset.org/abstracts/search?q=clarification%20request" title=" clarification request"> clarification request</a>, <a href="https://publications.waset.org/abstracts/search?q=second%20language%20%28L2%29" title=" second language (L2)"> second language (L2)</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching%20implication" title=" teaching implication"> teaching implication</a> </p> <a href="https://publications.waset.org/abstracts/74368/studying-second-language-learners-language-behavior-from-conversation-analysis-perspective" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74368.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">688</span> Automatic Verification Technology of Virtual Machine Software Patch on IaaS Cloud</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yoji%20Yamato">Yoji Yamato</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an automatic verification technology for software patches applied to user virtual environments on IaaS Cloud, in order to decrease patch verification costs. IaaS services have spread widely in recent years, and many users can customize virtual machines on IaaS Cloud like their own private servers. Regarding patches for the OS or middleware installed on virtual machines, users need to apply and verify these patches by themselves. 
This task increases users' operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for user virtual environments from a test case DB, distributes patches to virtual machines on the replicated environments and runs those test cases automatically on the replicated environments. We have implemented the proposed method on OpenStack using Jenkins and confirmed its feasibility. Using the implementation, we confirmed the effectiveness, in terms of test case creation effort, of our proposed idea of a 2-tier abstraction of software functions and test cases. We also evaluated the automatic verification performance of environment replication, test case extraction and test case execution. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=OpenStack" title="OpenStack">OpenStack</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud%20computing" title=" cloud computing"> cloud computing</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20verification" title=" automatic verification"> automatic verification</a>, <a href="https://publications.waset.org/abstracts/search?q=jenkins" title=" jenkins"> jenkins</a> </p> <a href="https://publications.waset.org/abstracts/17257/automatic-verification-technology-of-virtual-machine-software-patch-on-iaas-cloud" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17257.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">487</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">687</span> Formal Verification for Ethereum Smart Contract Using Coq</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Xia%20Yang">Xia Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Yang"> Zheng Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Haiyong%20Sun"> Haiyong Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20Fang"> Yan Fang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jingyu%20Liu"> Jingyu Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jia%20Song"> Jia Song</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A smart contract in Ethereum is a program deployed on the Ethereum Virtual Machine (EVM) to help manage cryptocurrency. The security of smart contracts is critical to Ethereum’s operation and highly sensitive. In this paper, we present a formal model for smart contracts, using the separated term-obligation (STO) strategy to formalize and verify them. We use the IBM smart sponsor contract (SSC) as an example to elaborate the details of the formalization process. We also propose a formal smart sponsor contract model (FSSCM) and verify the SSC’s security properties with the interactive theorem prover Coq. Using our formal model and verification method, we found the 'Unchecked-Send' vulnerability in the SSC. Finally, we demonstrate how other smart contracts can be formalized and verified with this approach, and our work indicates that formal verification can effectively verify the correctness and security of smart contracts. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=smart%20contract" title="smart contract">smart contract</a>, <a href="https://publications.waset.org/abstracts/search?q=formal%20verification" title=" formal verification"> formal verification</a>, <a href="https://publications.waset.org/abstracts/search?q=Ethereum" title=" Ethereum"> Ethereum</a>, <a href="https://publications.waset.org/abstracts/search?q=Coq" title=" Coq"> Coq</a> </p> <a href="https://publications.waset.org/abstracts/85595/formal-verification-for-ethereum-smart-contract-using-coq" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85595.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">691</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">686</span> Identity Verification Using k-NN Classifiers and Autistic Genetic Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fuad%20M.%20Alkoot">Fuad M. Alkoot</a> </p> <p class="card-text"><strong>Abstract:</strong></p> DNA data have been used in forensics for decades. However, current research looks at using DNA as a biometric identity verification modality. The goal is to improve the speed of identification. We aim to use gene data that was initially collected for autism detection to find whether, and how accurately, these data can be used for identification applications. Our main goal is to determine whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with using the nearest neighbor classifier to identify subjects. 
Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1. The classification rate remains close to optimal as the noise standard deviation increases to 3. This shows that the data can be used for identity verification with high accuracy using a simple classifier such as the k-nearest neighbor (k-NN). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometrics" title="biometrics">biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20data" title=" genetic data"> genetic data</a>, <a href="https://publications.waset.org/abstracts/search?q=identity%20verification" title=" identity verification"> identity verification</a>, <a href="https://publications.waset.org/abstracts/search?q=k%20nearest%20neighbor" title=" k nearest neighbor"> k nearest neighbor</a> </p> <a href="https://publications.waset.org/abstracts/75552/identity-verification-using-k-nn-classifiers-and-autistic-genetic-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75552.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">257</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">685</span> Failure Analysis and Verification Using an Integrated Method for Automotive Electric/Electronic Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lei%20Chen">Lei Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20Jiao"> Jian Jiao</a>, <a href="https://publications.waset.org/abstracts/search?q=Tingdi%20Zhao"> Tingdi Zhao</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Failures of automotive electric/electronic systems, which are universally considered to be safety-critical and software-intensive, may cause catastrophic accidents. Analysis and verification of failures in these kinds of systems is a big challenge with increasing system complexity. Model checking is often employed to allow formal verification by ensuring that the system model conforms to specified safety properties. The system-level effects of failures are established, and the effects on system behavior are observed through the formal verification. A hazard analysis technique, called Systems-Theoretic Process Analysis, is capable of identifying design flaws which may cause potential failure hazards, including software and system design errors and unsafe interactions among multiple system components. This paper presents a concept for using model checking integrated with Systems-Theoretic Process Analysis to perform failure analysis and verification of automotive electric/electronic systems. As a result, safety requirements are optimized, and failure propagation paths are found. Finally, an automotive electric/electronic system case study is used to verify the effectiveness and practicability of the method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=failure%20analysis%20and%20verification" title="failure analysis and verification">failure analysis and verification</a>, <a href="https://publications.waset.org/abstracts/search?q=model%20checking" title=" model checking"> model checking</a>, <a href="https://publications.waset.org/abstracts/search?q=system-theoretic%20process%20analysis" title=" system-theoretic process analysis"> system-theoretic process analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=automotive%20electric%2Felectronic%20system" title=" automotive electric/electronic system"> automotive electric/electronic system</a> </p> <a href="https://publications.waset.org/abstracts/112321/failure-analysis-and-verification-using-an-integrated-method-for-automotive-electricelectronic-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112321.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">120</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">684</span> The Effect of Iconic and Beat Gestures on Memory Recall in Greek’s First and Second Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eleni%20Ioanna%20Levantinou">Eleni Ioanna Levantinou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gestures play a major role in comprehension and memory recall because they aid the efficient channeling of meaning and support listeners’ comprehension and memory. In the present study, the assistance of two kinds of gestures (iconic and beat gestures) is tested with regard to memory and recall. 
The hypothesis investigated here is whether or not iconic and beat gestures provide assistance in memory and recall in Greek and in Greek speakers’ second language. Two groups of participants were formed, one comprising Greeks residing in Athens and one comprising Greeks residing in Copenhagen. Three kinds of stimuli were used: a video of words accompanied by iconic gestures, a video of words accompanied by beat gestures, and a video of words alone. The languages used are Greek and English. The words in the English videos were spoken by a native English speaker and by a Greek speaker speaking English. The reason for this is that when it comes to beat gestures, which serve a meta-cognitive function and are generated according to the intonation of a language, prosody plays a major role. Thus, participants with different prosodic influences may respond differently to rhythmic gestures. Memory recall was assessed by asking the participants to try to remember as many words as they could after viewing each video. Results show that iconic gestures provide significant assistance in memory and recall in Greek and in English, whether they are produced by a native or a second language speaker. In the case of beat gestures, though, the findings indicate that beat gestures may not play such a significant role in the Greek language. As far as intonation is concerned, no significant difference was found between beat gestures produced by a native English speaker and those produced by a Greek speaker speaking English. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=first%20language" title="first language">first language</a>, <a href="https://publications.waset.org/abstracts/search?q=gestures" title=" gestures"> gestures</a>, <a href="https://publications.waset.org/abstracts/search?q=memory" title=" memory"> memory</a>, <a href="https://publications.waset.org/abstracts/search?q=second%20language%20acquisition" title=" second language acquisition"> second language acquisition</a> </p> <a href="https://publications.waset.org/abstracts/49317/the-effect-of-iconic-and-beat-gestures-on-memory-recall-in-greeks-first-and-second-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49317.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">683</span> Online Authenticity Verification of a Biometric Signature Using Dynamic Time Warping Method and Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ga%C5%82ka%20Aleksandra">Gałka Aleksandra</a>, <a href="https://publications.waset.org/abstracts/search?q=Jeli%C5%84ska%20Justyna"> Jelińska Justyna</a>, <a href="https://publications.waset.org/abstracts/search?q=Masiak%20Albert"> Masiak Albert</a>, <a href="https://publications.waset.org/abstracts/search?q=Walentukiewicz%20Krzysztof"> Walentukiewicz Krzysztof</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An offline signature is a well-known, but not the safest, way to verify identity. Nowadays, to ensure proper authentication, e.g. in banking systems, multimodal verification is more widely used. 
In this paper, online signature analysis based on dynamic time warping (DTW) coupled with machine learning approaches is presented. In our research, signatures made with biometric pens were gathered. Signature features, as well as their forgeries, are described. For verification of authenticity, various methods were used, including convolutional neural networks operating on the DTW matrix and a multilayer perceptron using sums of DTW matrix paths. System efficiency was evaluated on signatures and signature forgeries collected on the same day. Results are presented and discussed in this paper. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20time%20warping" title="dynamic time warping">dynamic time warping</a>, <a href="https://publications.waset.org/abstracts/search?q=handwritten%20signature%20verification" title=" handwritten signature verification"> handwritten signature verification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature-based%20recognition" title=" feature-based recognition"> feature-based recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=online%20signature" title=" online signature"> online signature</a> </p> <a href="https://publications.waset.org/abstracts/153364/online-authenticity-verification-of-a-biometric-signature-using-dynamic-time-warping-method-and-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153364.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">682</span> Satellite Technology Usage for Greenhouse Gas Emissions Monitoring and Verification: Policy Considerations for an International 
System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Timiebi%20Aganaba-Jeanty">Timiebi Aganaba-Jeanty</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accurate and transparent monitoring, reporting and verification of Greenhouse Gas (GHG) emissions and removals is a requirement of the United Nations Framework Convention on Climate Change (UNFCCC). Several countries are obligated to prepare and submit an annual national greenhouse gas inventory covering anthropogenic emissions by sources and removals by sinks, subject to a review conducted by an international team of experts. However, the process is not without flaws. The self-reporting varies enormously in thoroughness, frequency and accuracy, and is inconsistent in the way such reporting occurs. The world’s space agencies are calling for a new generation of satellites that would be precise enough to map greenhouse gas emissions from individual nations. The plan is delicate politically because the global system could verify or cast doubt on emission reports from the member states of the UNFCCC. A level playing field is required, along with the idea that an international system should be perceived as an instrument to facilitate fairness and equality rather than one to spy on or punish. This change of perspective is required to get buy-in for an international verification system. The research examines the viability of a satellite system that would provide independent access to data regarding greenhouse gas emissions, and the policy and governance implications of its potential use as a monitoring and verification system for the Paris Agreement. It assesses the foundations of the reporting, monitoring and verification system as proposed in Paris and analyzes this in light of a proposed satellite system. 
The use of remote sensing technology has been debated for verification purposes and as evidence in courts but this is not without controversy. Lessons can be learned from its use in this context. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=greenhouse%20gas%20emissions" title="greenhouse gas emissions">greenhouse gas emissions</a>, <a href="https://publications.waset.org/abstracts/search?q=reporting" title=" reporting"> reporting</a>, <a href="https://publications.waset.org/abstracts/search?q=monitoring%20and%20verification" title=" monitoring and verification"> monitoring and verification</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite" title=" satellite"> satellite</a>, <a href="https://publications.waset.org/abstracts/search?q=UNFCCC" title=" UNFCCC"> UNFCCC</a> </p> <a href="https://publications.waset.org/abstracts/57507/satellite-technology-usage-for-greenhouse-gas-emissions-monitoring-and-verification-policy-considerations-for-an-international-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57507.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">286</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=4">4</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=23">23</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=24">24</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=speaker%20verification&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> 
</div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" 
target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>