<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: voice recognition</title> <meta name="description" content="Search results for: voice recognition"> <meta name="keywords" content="voice recognition"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" 
alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="voice recognition" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div 
class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="voice recognition"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2135</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: voice recognition</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2135</span> Speaker Recognition Using LIRA Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nestor%20A.%20Garcia%20Fragoso">Nestor A. Garcia Fragoso</a>, <a href="https://publications.waset.org/abstracts/search?q=Tetyana%20Baydyk"> Tetyana Baydyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernst%20Kussul"> Ernst Kussul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article contains information from our investigation in the field of voice recognition. For this purpose, we created a voice database that contains different phrases in two languages, English and Spanish, for men and women. As a classifier, the LIRA (Limited Receptive Area) grayscale neural classifier was selected. 
The LIRA grayscale neural classifier was developed for image recognition tasks and demonstrated good results. Therefore, we decided to develop a recognition system using this classifier for voice recognition. The system can recognize a speaker’s voice from a specific set of speakers. For this purpose, it takes spectrograms of the voice signals as input, extracts their characteristics, and identifies the speaker. The results are described and analyzed in this article. The classifier can be used for speaker identification in security systems or in smart buildings for different types of intelligent devices. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=extreme%20learning" title="extreme learning">extreme learning</a>, <a href="https://publications.waset.org/abstracts/search?q=LIRA%20neural%20classifier" title=" LIRA neural classifier"> LIRA neural classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title=" speaker identification"> speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20recognition" title=" voice recognition"> voice recognition</a> </p> <a href="https://publications.waset.org/abstracts/112384/speaker-recognition-using-lira-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2134</span> Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Vesna%20Kirandziska">Vesna Kirandziska</a>, <a href="https://publications.waset.org/abstracts/search?q=Nevena%20Ackovska"> Nevena Ackovska</a>, <a href="https://publications.waset.org/abstracts/search?q=Ana%20Madevska%20Bogdanova"> Ana Madevska Bogdanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Emotion recognition is a challenging problem that remains open from the perspective of both intelligent systems and psychology. In this paper, both voice features and facial features are used to build an emotion recognition system. Support Vector Machine classifiers are built using raw data from video recordings. The results obtained for emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison is made between classifiers built from facial data only, voice data only, and the combination of both. The need for a better combination of the information from facial expressions and voice data is argued. 
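A rough sketch of the comparison this abstract describes: Support Vector Machine classifiers trained on voice-only, face-only, and fused feature vectors. The feature dimensions and random data below are placeholders, not the paper's actual video-derived features.

```python
# Illustrative sketch only: SVMs on voice-only, face-only, and fused features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
voice = rng.normal(size=(n, 20))      # stand-in for voice features
face = rng.normal(size=(n, 30))       # stand-in for facial features
labels = rng.integers(0, 4, size=n)   # stand-in for four emotion classes

def svm_accuracy(X, y):
    """Train an SVM and report held-out accuracy."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    return SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte)

acc_voice = svm_accuracy(voice, labels)
acc_face = svm_accuracy(face, labels)
# Feature-level fusion: concatenate both modalities before classification.
acc_both = svm_accuracy(np.hstack([voice, face]), labels)
```

On real features, comparing these three accuracies is what motivates the paper's argument for combining modalities.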
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title="emotion recognition">emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/42384/comparing-emotion-recognition-from-voice-and-facial-data-using-time-invariant-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">316</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2133</span> Advanced Mouse Cursor Control and Speech Recognition Module</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prasad%20Kalagura">Prasad Kalagura</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Veeresh%20kumar"> B. Veeresh kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We constructed an interface system that allows a paralyzed user to interact with a computer with almost full functional capability. A real-time tracking algorithm is implemented based on adaptive skin detection and motion analysis. Mouse clicking is activated by the user's eye blink, detected through a sensor. The keyboard function is implemented by a voice recognition kit. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=embedded%20ARM7%20processor" title="embedded ARM7 processor">embedded ARM7 processor</a>, <a href="https://publications.waset.org/abstracts/search?q=mouse%20pointer%20control" title=" mouse pointer control"> mouse pointer control</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20recognition" title=" voice recognition "> voice recognition </a> </p> <a href="https://publications.waset.org/abstracts/31757/advanced-mouse-cursor-control-and-speech-recognition-module" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31757.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">578</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2132</span> Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Krishna%20Mohan%20Bathula">Krishna Mohan Bathula</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatou%20Bintou%20Loucoubar"> Fatou Bintou Loucoubar</a>, <a href="https://publications.waset.org/abstracts/search?q=FNU%20Kaleemunnisa"> FNU Kaleemunnisa</a>, <a href="https://publications.waset.org/abstracts/search?q=Christelle%20Scharff"> Christelle Scharff</a>, <a href="https://publications.waset.org/abstracts/search?q=Mark%20Anthony%20De%20Castro"> Mark Anthony De Castro</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Voice recognition algorithms such as automatic speech recognition and text-to-speech systems with African languages can play an important role in bridging the digital divide of Artificial 
Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that classifies user responses as inputs for an interactive voice response system. A dataset of the Wolof words for ‘yes’ and ‘no’ is collected as audio recordings. A two-stage data augmentation approach is adopted to enlarge the dataset to the size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients are implemented. Convolutional Neural Networks (CNNs) have proven very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. To perform voice response classification, the recordings are transformed into sound frequency feature spectra, and an image classification methodology is then applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many web and mobile applications. 
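The feature pipeline described above, augment a recording and turn it into a spectrogram "image" for a CNN, can be sketched as follows. The two augmentation stages shown (time shift, additive noise) and the sample rate are assumptions; the abstract does not detail them, and a sine tone stands in for a recorded Wolof word.

```python
# Sketch: audio -> augmentation -> log-power spectrogram "image" for a CNN.
import numpy as np
from scipy.signal import spectrogram

sr = 16000
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t)   # stand-in for a 1 s voice recording

def augment(x, rng):
    """Assumed two-stage augmentation: random time shift, then additive noise."""
    x = np.roll(x, rng.integers(-sr // 10, sr // 10))   # stage 1: time shift
    return x + rng.normal(scale=0.005, size=x.shape)    # stage 2: noise

rng = np.random.default_rng(0)
aug = augment(wave, rng)
f, ts, S = spectrogram(aug, fs=sr, nperseg=512)
image = np.log(S + 1e-10)   # log-power spectrogram, treated as a 2-D image
```

The resulting 2-D array is what a deep CNN (not shown) would consume, exactly as it would an image.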
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=interactive%20voice%20response" title=" interactive voice response"> interactive voice response</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20response%20recognition" title=" voice response recognition"> voice response recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=wolof%20word%20classification" title=" wolof word classification"> wolof word classification</a> </p> <a href="https://publications.waset.org/abstracts/150305/wolof-voice-response-recognition-system-a-deep-learning-model-for-wolof-audio-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150305.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2131</span> Integrated Gesture and Voice-Activated Mouse Control System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dev%20Pratap%20Singh">Dev Pratap Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Harshika%20Hasija"> Harshika Hasija</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashwini%20S."> Ashwini S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computers using hand gestures and voice commands. 
The system leverages advanced computer vision techniques using the MediaPipe framework and OpenCV to detect and interpret real-time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the speech recognition library allows for seamless execution of tasks like web searches, location navigation, and gesture control in the system through voice commands. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20assistant" title=" voice assistant"> voice assistant</a> </p> <a href="https://publications.waset.org/abstracts/193896/integrated-gesture-and-voice-activated-mouse-control-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193896.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">10</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2130</span> Voice Commands Recognition of Mentor Robot in Noisy Environment Using HTK</h5> <div 
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khenfer-Koummich%20Fatma">Khenfer-Koummich Fatma</a>, <a href="https://publications.waset.org/abstracts/search?q=Hendel%20Fatiha"> Hendel Fatiha</a>, <a href="https://publications.waset.org/abstracts/search?q=Mesbahi%20Larbi"> Mesbahi Larbi </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an approach based on Hidden Markov Models (HMMs) using HTK tools. The goal is to create a man-machine interface with a voice recognition system that allows the operator to tele-operate a mentor robot to execute specific tasks such as rotate, raise, close, etc. This system should take into account different levels of environmental noise. This approach has been applied to isolated words representing the robot commands spoken in two languages: French and Arabic. The recognition rate obtained is the same for both Arabic and French speech in the neutral words. However, there is a slight difference in favor of the Arabic speech when Gaussian white noise is added at a Signal-to-Noise Ratio (SNR) of 30 dB: the Arabic speech recognition rate is 69% and the French speech recognition rate is 80%. This can be explained by the phonetic context of each language when noise is added. 
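The noise condition used in these experiments, Gaussian white noise added at a given SNR, can be reproduced with a short helper. The 16 kHz rate and the sine "command" below are illustrative stand-ins for the actual recordings.

```python
# Sketch: corrupt a clean command signal with white Gaussian noise at a target SNR.
import numpy as np

def add_noise_at_snr(signal, snr_db, rng):
    """Return signal plus white Gaussian noise at the requested SNR (dB)."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))   # SNR = 10*log10(Ps/Pn)
    return signal + rng.normal(scale=np.sqrt(p_noise), size=signal.shape)

sr = 16000
clean = np.sin(2 * np.pi * 200 * np.arange(sr) / sr)   # stand-in command audio
noisy = add_noise_at_snr(clean, snr_db=30, rng=np.random.default_rng(0))
```

The recognizer would then be evaluated on `noisy` versions of each isolated-word recording.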
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=voice%20command" title="voice command">voice command</a>, <a href="https://publications.waset.org/abstracts/search?q=HMM" title=" HMM"> HMM</a>, <a href="https://publications.waset.org/abstracts/search?q=TIMIT" title=" TIMIT"> TIMIT</a>, <a href="https://publications.waset.org/abstracts/search?q=noise" title=" noise"> noise</a>, <a href="https://publications.waset.org/abstracts/search?q=HTK" title=" HTK"> HTK</a>, <a href="https://publications.waset.org/abstracts/search?q=Arabic" title=" Arabic"> Arabic</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a> </p> <a href="https://publications.waset.org/abstracts/24454/voice-commands-recognition-of-mentor-robot-in-noisy-environment-using-htk" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24454.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">382</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2129</span> Recognition of Voice Commands of Mentor Robot in Noisy Environment Using Hidden Markov Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khenfer%20Koummich%20Fatma">Khenfer Koummich Fatma</a>, <a href="https://publications.waset.org/abstracts/search?q=Hendel%20Fatiha"> Hendel Fatiha</a>, <a href="https://publications.waset.org/abstracts/search?q=Mesbahi%20Larbi"> Mesbahi Larbi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an approach based on Hidden Markov Models (HMMs) using HTK tools. 
The goal is to create a human-machine interface with a voice recognition system that allows the operator to teleoperate a mentor robot to execute specific tasks such as rotate, raise, close, etc. This system should take into account different levels of environmental noise. This approach has been applied to isolated words representing the robot commands pronounced in two languages: French and Arabic. The obtained recognition rate is the same for both Arabic and French speech in the neutral words. However, there is a slight difference in favor of the Arabic speech when Gaussian white noise is added at a Signal-to-Noise Ratio (SNR) of 30 dB; in this case, the Arabic speech recognition rate is 69%, and the French speech recognition rate is 80%. This can be explained by the phonetic context of each language when noise is added. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arabic%20speech%20recognition" title="Arabic speech recognition">Arabic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Hidden%20Markov%20Model%20%28HMM%29" title=" Hidden Markov Model (HMM)"> Hidden Markov Model (HMM)</a>, <a href="https://publications.waset.org/abstracts/search?q=HTK" title=" HTK"> HTK</a>, <a href="https://publications.waset.org/abstracts/search?q=noise" title=" noise"> noise</a>, <a href="https://publications.waset.org/abstracts/search?q=TIMIT" title=" TIMIT"> TIMIT</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20command" title=" voice command"> voice command</a> </p> <a href="https://publications.waset.org/abstracts/67988/recognition-of-voice-commands-of-mentor-robot-in-noisy-environment-using-hidden-markov-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67988.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads 
<span class="badge badge-light">388</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2128</span> Acoustic Analysis for Comparison and Identification of Normal and Disguised Speech of Individuals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Surbhi%20Mathur">Surbhi Mathur</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20M.%20Vyas"> J. M. Vyas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Although forensic speaker recognition technology has developed rapidly, there are still many problems to be solved. The biggest problem arises when cases involving disguised voice samples come up for examination and identification. Voice samples of this type from anonymous callers are frequently encountered in crimes involving kidnapping, blackmailing, hoax extortion, and many more, where the speaker makes a deliberate effort to manipulate their natural voice in order to conceal their identity for fear of being caught. Voice disguise causes serious damage to the natural vocal parameters of the speaker and thus complicates the process of identification. The sole objective of this doctoral project is to find out whether definite opinions can be rendered in cases involving disguised speech. This is done by experimentally determining the effects of different disguise forms on personal identification and the percentage rate of speaker recognition for various voice disguise techniques, such as raised pitch, lowered pitch, increased nasality, covering the mouth, constricting the tract, and placing an obstacle in the mouth, by analyzing and comparing the amount of phonetic and acoustic variation in artificial (disguised) and natural samples of an individual through auditory as well as spectrographic analysis. 
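One of the acoustic comparisons named above, a natural sample versus a pitch-raised disguise, can be illustrated with a crude autocorrelation pitch estimator. Pure tones stand in for real recordings here; actual casework relies on spectrographic tools such as formant and pitch tracks, not this toy.

```python
# Crude autocorrelation-based pitch estimate: natural vs. pitch-raised "voice".
import numpy as np

def estimate_f0(x, sr, fmin=60, fmax=400):
    """Single pitch estimate for a voiced frame via the autocorrelation peak."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])   # strongest periodicity in the pitch range
    return sr / lag

sr = 16000
t = np.arange(sr // 2) / sr
natural = np.sin(2 * np.pi * 120 * t)     # stand-in: natural phonation, 120 Hz
disguised = np.sin(2 * np.pi * 240 * t)   # stand-in: same speaker, pitch raised

f0_nat = estimate_f0(natural, sr)
f0_dis = estimate_f0(disguised, sr)
```

Comparing such pitch (and formant) measurements across natural and disguised samples is the kind of acoustic variation the project quantifies.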
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=forensic" title="forensic">forensic</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=voice" title=" voice"> voice</a>, <a href="https://publications.waset.org/abstracts/search?q=speech" title=" speech"> speech</a>, <a href="https://publications.waset.org/abstracts/search?q=disguise" title=" disguise"> disguise</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a> </p> <a href="https://publications.waset.org/abstracts/47439/acoustic-analysis-for-comparison-and-identification-of-normal-and-disguised-speech-of-individuals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47439.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">369</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2127</span> Gesture-Controlled Interface Using Computer Vision and Python</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vedant%20Vardhan%20Rathour">Vedant Vardhan Rathour</a>, <a href="https://publications.waset.org/abstracts/search?q=Anant%20Agrawal"> Anant Agrawal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computer using hand gestures and voice commands. 
The system leverages advanced computer vision techniques using the MediaPipe framework and OpenCV to detect and interpret real time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the Speech Recognition library allows for seamless execution of tasks like web searches, location navigation and gesture control on the system through voice commands. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/193844/gesture-controlled-interface-using-computer-vision-and-python" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193844.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">12</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2126</span> Effect of Helium and Sulfur Hexafluoride Gas Inhalation on Voice Resonances</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pallavi%20Marathe">Pallavi Marathe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Voice is considered to be a unique biometric property of human beings. 
Unlike other biometric evidence, such as fingerprints and retina scans, voice can be easily changed or mimicked. The present paper discusses how the inhalation of helium and sulfur hexafluoride (SF6) gas affects the voice formant frequencies, which are the resonant frequencies of the vocal tract. Helium is a low-density gas; hence, sound travels through it at a higher speed than through air. In SF6, by contrast, sound travels at a lower speed than in air due to the gas's higher density. This raises the resonant frequencies of the voice in helium and lowers them in SF6. Results are presented with the help of Praat software, which is used for voice analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=voice%20formants" title="voice formants">voice formants</a>, <a href="https://publications.waset.org/abstracts/search?q=helium" title=" helium"> helium</a>, <a href="https://publications.waset.org/abstracts/search?q=sulfur%20hexafluoride" title=" sulfur hexafluoride"> sulfur hexafluoride</a>, <a href="https://publications.waset.org/abstracts/search?q=gas%20inhalation" title=" gas inhalation"> gas inhalation</a> </p> <a href="https://publications.waset.org/abstracts/115121/effect-of-helium-and-sulfur-hexafluoride-gas-inhalation-on-voice-resonances" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/115121.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">125</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2125</span> Environmentally Adaptive Acoustic Echo Suppression for Barge-in Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Jong%20Han%20Joo">Jong Han Joo</a>, <a href="https://publications.waset.org/abstracts/search?q=Jung%20Hoon%20Lee"> Jung Hoon Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Young%20Sun%20Kim"> Young Sun Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Jae%20Young%20Kang"> Jae Young Kang</a>, <a href="https://publications.waset.org/abstracts/search?q=Seung%20Ho%20Choi"> Seung Ho Choi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, we propose a novel technique for acoustic echo suppression (AES) during speech recognition under barge-in conditions. Conventional AES methods based on spectral subtraction apply fixed weights to the estimated echo path transfer function (EPTF) at the current signal segment and to the EPTF estimated until the previous time interval. We propose a new approach that adaptively updates weight parameters in response to abrupt changes in the acoustic environment due to background noise or double-talk. Furthermore, we devised a voice activity detector and an initial time-delay estimator for barge-in speech recognition in communication networks. The initial time delay is estimated using a log-spectral distance measure as well as cross-correlation coefficients. The experimental results show that the developed techniques can be successfully applied in barge-in speech recognition systems. 
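The cross-correlation part of the initial time-delay estimation described above can be sketched as follows (the paper additionally uses a log-spectral distance measure, not shown). The 250-sample delay, the 0.6 attenuation, and the echo-only microphone signal are illustrative assumptions.

```python
# Sketch: estimate the echo-path delay from the cross-correlation peak
# between the far-end (loudspeaker) signal and the microphone signal.
import numpy as np

def initial_delay(far_end, mic):
    """Estimate the echo delay in samples via the cross-correlation peak."""
    xc = np.correlate(mic, far_end, mode="full")
    return int(np.argmax(xc)) - (len(far_end) - 1)

rng = np.random.default_rng(0)
far = rng.normal(size=4000)   # far-end signal played by the loudspeaker
delay = 250                   # assumed acoustic echo-path delay (samples)
mic = np.concatenate([np.zeros(delay), 0.6 * far])[:4000]  # attenuated echo
est = initial_delay(far, mic)
```

An AES front end would use this estimate to align the far-end reference with the microphone signal before estimating the EPTF.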
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acoustic%20echo%20suppression" title="acoustic echo suppression">acoustic echo suppression</a>, <a href="https://publications.waset.org/abstracts/search?q=barge-in" title=" barge-in"> barge-in</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=echo%20path%20transfer%20function" title=" echo path transfer function"> echo path transfer function</a>, <a href="https://publications.waset.org/abstracts/search?q=initial%20delay%20estimator" title=" initial delay estimator"> initial delay estimator</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20activity%20detector" title=" voice activity detector"> voice activity detector</a> </p> <a href="https://publications.waset.org/abstracts/17151/environmentally-adaptive-acoustic-echo-suppression-for-barge-in-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17151.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">372</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2124</span> Features Dimensionality Reduction and Multi-Dimensional Voice-Processing Program to Parkinson Disease Discrimination</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Djamila%20Meghraoui">Djamila Meghraoui</a>, <a href="https://publications.waset.org/abstracts/search?q=Bachir%20Boudraa"> Bachir Boudraa</a>, <a href="https://publications.waset.org/abstracts/search?q=Thouraya%20Meksen"> Thouraya Meksen</a>, <a 
href="https://publications.waset.org/abstracts/search?q=M.Boudraa"> M.Boudraa </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Parkinson's disease is a pathology that involves characteristic perturbations in patients’ voices. This paper describes a proposed method that aims to diagnose persons with Parkinson’s disease (PWP) by analyzing their voice signals online. First, threshold signal alterations are determined by the Multi-Dimensional Voice Program (MDVP). Principal Component Analysis (PCA) is exploited to select the main voice principal components that are significantly affected in a patient. The decision phase is realized by a Multinomial Naive Bayes (MNB) classifier that categorizes an analyzed voice into one of the two resulting classes: healthy or PWP. The achieved prediction accuracy of 98.8% is very promising. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Parkinson%E2%80%99s%20disease%20recognition" title="Parkinson’s disease recognition">Parkinson’s disease recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=PCA" title=" PCA"> PCA</a>, <a href="https://publications.waset.org/abstracts/search?q=MDVP" title=" MDVP"> MDVP</a>, <a href="https://publications.waset.org/abstracts/search?q=multinomial%20Naive%20Bayes" title=" multinomial Naive Bayes"> multinomial Naive Bayes</a> </p> <a href="https://publications.waset.org/abstracts/59181/features-dimensionality-reduction-and-multi-dimensional-voice-processing-program-to-parkinson-disease-discrimination" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59181.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2123</span> 
Identity Verification Based on Multimodal Machine Learning on Red Green Blue (RGB) Red Green Blue-Depth (RGB-D) Voice Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=LuoJiaoyang">LuoJiaoyang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu%20Hongyang"> Yu Hongyang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we experimented with a new approach to multimodal identification using RGB, RGB-D and voice data. The multimodal combination of RGB and voice data has been applied in tasks such as emotion recognition with good results and stability, and the same holds for identity recognition tasks. We believe that data from different modalities can enhance the model's performance through mutual reinforcement. We therefore extend the bimodal setup to three modalities, aiming to improve the network's effectiveness by increasing the number of modalities. We also implemented each single-modal identification system separately, tested the data of these different modalities under clean and noisy conditions, and compared the performance with the multimodal model. In the process of designing the multimodal model, we tried a variety of fusion strategies and finally chose the one with the best performance. The experimental results show that the performance of the multimodal system is better than that of any single modality, especially in dealing with noise, and the multimodal system achieves an average improvement of 5%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=three%20modalities" title=" three modalities"> three modalities</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D" title=" RGB-D"> RGB-D</a>, <a href="https://publications.waset.org/abstracts/search?q=identity%20verification" title=" identity verification"> identity verification</a> </p> <a href="https://publications.waset.org/abstracts/163265/identity-verification-based-on-multimodal-machine-learning-on-red-green-blue-rgb-red-green-blue-depth-rgb-d-voice-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163265.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2122</span> Comparing Sounds of the Singing Voice</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Christel%20Elisabeth%20Bonin">Christel Elisabeth Bonin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This experiment aims at showing that classical singing and belting have both different singing qualities, but singing with a speaking voice has no singing quality. For this purpose, a singing female voice was recorded on four different tone pitches, singing the vowel ‘a’ by using 3 different kinds of singing - classical trained voice, belting voice and speaking voice. The recordings have been entered in the Software Praat. Then the formants of each recorded tone were compared to each other and put in relationship to the singer’s formant. 
The visible results are taken as an indicator of comparable sound qualities of a classically trained female voice and a belting female voice concerning the concentration of overtones in F1 to F5, and of a lack of sound quality in the speaking voice for singing purposes. The results also show that classical singing and belting are both valuable vocal techniques for singing due to their richness of overtones and that belting is not comparable to shouting or screaming. Singing with a speaking voice, in contrast, should not be called singing due to the lack of overtones, which by definition means that there is no musical tone. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=formants" title="formants">formants</a>, <a href="https://publications.waset.org/abstracts/search?q=overtone" title=" overtone"> overtone</a>, <a href="https://publications.waset.org/abstracts/search?q=singer%E2%80%99s%20formant" title=" singer’s formant"> singer’s formant</a>, <a href="https://publications.waset.org/abstracts/search?q=singing%20voice" title=" singing voice"> singing voice</a>, <a href="https://publications.waset.org/abstracts/search?q=belting" title=" belting"> belting</a>, <a href="https://publications.waset.org/abstracts/search?q=classical%20singing" title=" classical singing"> classical singing</a>, <a href="https://publications.waset.org/abstracts/search?q=singing%20with%20the%20speaking%20voice" title=" singing with the speaking voice"> singing with the speaking voice</a> </p> <a href="https://publications.waset.org/abstracts/41946/comparing-sounds-of-the-singing-voice" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41946.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">329</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">2121</span> Biometric Recognition Techniques: A Survey</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shabir%20Ahmad%20Sofi">Shabir Ahmad Sofi</a>, <a href="https://publications.waset.org/abstracts/search?q=Shubham%20Aggarwal"> Shubham Aggarwal</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanyam%20Singhal"> Sanyam Singhal</a>, <a href="https://publications.waset.org/abstracts/search?q=Roohie%20Naaz"> Roohie Naaz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometric recognition refers to the automatic recognition of individuals based on a feature vector(s) derived from their physiological and/or behavioral characteristics. Biometric recognition systems should provide a reliable personal recognition scheme to either confirm or determine the identity of an individual. These features are used to provide authentication for computer-based security systems. Applications of such systems include computer systems security, secure electronic banking, mobile phones, credit cards, secure access to buildings, and health and social services. By using biometrics, a person can be identified based on 'who she/he is' rather than 'what she/he has' (card, token, key) or 'what she/he knows' (password, PIN). In this paper, a brief overview of biometric methods, both unimodal and multimodal, and their advantages and disadvantages will be presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric" title="biometric">biometric</a>, <a href="https://publications.waset.org/abstracts/search?q=DNA" title=" DNA"> DNA</a>, <a href="https://publications.waset.org/abstracts/search?q=fingerprint" title=" fingerprint"> fingerprint</a>, <a href="https://publications.waset.org/abstracts/search?q=ear" title=" ear"> ear</a>, <a href="https://publications.waset.org/abstracts/search?q=face" title=" face"> face</a>, <a href="https://publications.waset.org/abstracts/search?q=retina%20scan" title=" retina scan"> retina scan</a>, <a href="https://publications.waset.org/abstracts/search?q=gait" title=" gait"> gait</a>, <a href="https://publications.waset.org/abstracts/search?q=iris" title=" iris"> iris</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20recognition" title=" voice recognition"> voice recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=unimodal%20biometric" title=" unimodal biometric"> unimodal biometric</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20biometric" title=" multimodal biometric"> multimodal biometric</a> </p> <a href="https://publications.waset.org/abstracts/15520/biometric-recognition-techniques-a-survey" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15520.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">756</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2120</span> The Voice Rehabilitation Program Following Ileocolon Flap Transfer for Voice Reconstruction after Laryngectomy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Chi-Wen%20Huang">Chi-Wen Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hung-Chi%20Chen"> Hung-Chi Chen </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Total laryngectomy affects swallowing, speech function, and quality of life in head and neck cancer patients. Voice restoration plays an important role in social activities and communication. Several techniques have been developed for voice restoration and reported to improve quality of life. However, the rehabilitation program for voice reconstruction using the ileocolon flap remains unclear. A retrospective study was done, and the patients' data were drawn from the medical records of those who underwent voice reconstruction by ileocolon flap after laryngectomy between 2010 and 2016. All of them were trained to swallow first; then, voice rehabilitation was started. The voice outcome was evaluated after 6 months using a 4-point scoring scale. In our results, 9.8% of patients could give a voice clear enough for everyone to understand their speech, 61% of patients could be understood well by family and friends, 20.2% of patients could only talk with family, and 9% of patients had difficulty being understood. Moreover, 57% of the patients did not need a second surgery, while in 43% of patients the voice was made clear by a second surgery. In this study, we demonstrated that the rehabilitation program after voice reconstruction with ileocolon flap for post-laryngectomy patients is important because the anatomical structure is different from that of the normal larynx. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=post-laryngectomy" title="post-laryngectomy">post-laryngectomy</a>, <a href="https://publications.waset.org/abstracts/search?q=ileocolon%20flap" title=" ileocolon flap"> ileocolon flap</a>, <a href="https://publications.waset.org/abstracts/search?q=rehabilitation" title=" rehabilitation"> rehabilitation</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20reconstruction" title=" voice reconstruction"> voice reconstruction</a> </p> <a href="https://publications.waset.org/abstracts/87060/the-voice-rehabilitation-program-following-ileocolon-flap-transfer-for-voice-reconstruction-after-laryngectomy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87060.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">157</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2119</span> Patient-Friendly Hand Gesture Recognition Using AI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Prabhu">K. Prabhu</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Dinesh"> K. Dinesh</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Ranjani"> M. Ranjani</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Suhitha"> M. Suhitha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the difficult times of COVID-19, hospitalized patients often found it hard to convey what they wanted or needed to the attendee. Sometimes the attendees might also not be there. 
In that case, the patients can use simple hand gestures to control electrical appliances (for example, switching a zero-watt bulb) and three other gestures for voice-note intimation. In this AI-based hand gesture recognition project, a NodeMCU is used for the control action of the relay; it is connected to Firebase for storing the value in the cloud and is interfaced with the Python code via a Raspberry Pi. For three hand gestures, a voice clip is added for intimation to the attendee. This is done with the help of Google’s text-to-speech and the built-in audio file option on the Raspberry Pi 4. All five gestures are detected when shown by hand to the webcam, which is placed for gesture detection. A personal computer is used for displaying the gestures and for running the code in the Raspberry Pi Imager. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nodeMCU" title="nodeMCU">nodeMCU</a>, <a href="https://publications.waset.org/abstracts/search?q=AI%20technology" title=" AI technology"> AI technology</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture" title=" gesture"> gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=patient" title=" patient"> patient</a> </p> <a href="https://publications.waset.org/abstracts/144943/patient-friendly-hand-gesture-recognition-using-ai" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2118</span> Developed Text-Independent Speaker Verification System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Mohammed%20Arif">Mohammed Arif</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdessalam%20Kifouche"> Abdessalam Kifouche</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech is a very convenient way of communication between people and machines. It conveys information about the identity of the talker. Since speaker recognition technology is increasingly securing our everyday lives, the objective of this paper is to develop two automatic text-independent speaker verification (TI-SV) systems using low-level spectral features and machine learning methods: (i) the first system is based on a support vector machine (SVM), which has been widely used in voice signal processing for speaker recognition, i.e., verifying the identity of a speaker based on his or her voice characteristics; (ii) the second is based on a Gaussian Mixture Model (GMM) combined with a Universal Background Model (UBM), which draws on functions from different resources to complement the SVM-based system. 
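The GMM/UBM decision rule referred to above can be sketched in a few lines of NumPy. This is a toy illustration under stated assumptions: diagonal covariances, hand-set model parameters, and an average log-likelihood-ratio threshold. A real system would train the UBM on many speakers and adapt each speaker model from it; none of the parameter values below come from the paper.

```python
import numpy as np

def gmm_avg_loglik(x, weights, means, variances):
    """Average per-frame log-likelihood of frames x (N, D) under a
    diagonal-covariance Gaussian mixture."""
    x = np.atleast_2d(x)
    comp = []
    for w, mu, var in zip(weights, means, variances):
        mu, var = np.asarray(mu), np.asarray(var)
        # log N(x | mu, diag(var)) for every frame
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=1)
        comp.append(np.log(w) + ll)
    # log-sum-exp over mixture components, then average over frames
    return float(np.logaddexp.reduce(np.stack(comp, axis=1), axis=1).mean())

def verify(features, speaker_gmm, ubm, threshold=0.0):
    """Accept the identity claim if the log-likelihood ratio between the
    claimed speaker's GMM and the universal background model exceeds
    the decision threshold."""
    llr = gmm_avg_loglik(features, *speaker_gmm) - gmm_avg_loglik(features, *ubm)
    return llr > threshold
```

`verify` accepts a claim when the claimed speaker's model explains the feature frames better than the background model by more than the threshold.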
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speaker%20verification" title="speaker verification">speaker verification</a>, <a href="https://publications.waset.org/abstracts/search?q=text-independent" title=" text-independent"> text-independent</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=cepstral%20analysis" title=" cepstral analysis"> cepstral analysis</a> </p> <a href="https://publications.waset.org/abstracts/183493/developed-text-independent-speaker-verification-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">58</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2117</span> The Effect of the Hemispheres of the Brain and the Tone of Voice on Persuasion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rica%20Jell%20de%20Laza">Rica Jell de Laza</a>, <a href="https://publications.waset.org/abstracts/search?q=Jose%20Alberto%20Fernandez"> Jose Alberto Fernandez</a>, <a href="https://publications.waset.org/abstracts/search?q=Andrea%20Marie%20Mendoza"> Andrea Marie Mendoza</a>, <a href="https://publications.waset.org/abstracts/search?q=Qristin%20Jeuel%20Regalado"> Qristin Jeuel Regalado</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates whether participants 
experience different levels of persuasion depending on the hemisphere of the brain and the tone of voice. The experiment was performed on 96 volunteer undergraduate students taking an introductory course in psychology. The participants took part in a 2 x 3 (Hemisphere: left, right x Tone of Voice: positive, neutral, negative) Mixed Factorial Design to measure how much a person was persuaded. Results showed that the hemisphere of the brain and the tone of voice used did not significantly affect the results individually. Furthermore, there was no interaction effect. Therefore, the hemispheres of the brain and the tone of voice employed play insignificant roles in persuading a person. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dichotic%20listening" title="dichotic listening">dichotic listening</a>, <a href="https://publications.waset.org/abstracts/search?q=brain%20hemisphere" title=" brain hemisphere"> brain hemisphere</a>, <a href="https://publications.waset.org/abstracts/search?q=tone%20of%20voice" title=" tone of voice"> tone of voice</a>, <a href="https://publications.waset.org/abstracts/search?q=persuasion" title=" persuasion "> persuasion </a> </p> <a href="https://publications.waset.org/abstracts/62379/the-effect-of-the-hemispheres-of-the-brain-and-the-tone-of-voice-on-persuasion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62379.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">307</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2116</span> Automatic Speech Recognition Systems Performance Evaluation Using Word Error Rate Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Jo%C3%A3o%20Rato">João Rato</a>, <a href="https://publications.waset.org/abstracts/search?q=Nuno%20Costa"> Nuno Costa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human verbal communication is a two-way process that requires mutual understanding. This kind of communication, also called dialogue, can take place not only between human agents but also between humans and machines. The interaction between man and machine by means of natural language has an important role in improving the communication between the two. Aiming at knowing the performance of some speech recognition systems, this document shows the results of tests carried out according to the Word Error Rate evaluation method. Besides that, a set of information linked to man-machine communication systems is also given. After this work was completed, conclusions were drawn regarding the speech recognition systems, notably their poor performance concerning voice interpretation in noisy environments. 
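The Word Error Rate used in the study above is the standard metric: the word-level Levenshtein distance between the recognizer's hypothesis and the reference transcript, normalized by the reference length. A minimal implementation (the function name and interface are illustrative, not taken from the paper):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with a classic dynamic-programming edit distance over words."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i          # i deletions
    for j in range(len(h) + 1):
        d[0][j] = j          # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions relative to a short reference.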
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=man-machine%20conversation" title=" man-machine conversation"> man-machine conversation</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=spoken%20dialogue%20systems" title=" spoken dialogue systems"> spoken dialogue systems</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20error%20rate" title=" word error rate"> word error rate</a> </p> <a href="https://publications.waset.org/abstracts/62274/automatic-speech-recognition-systems-performance-evaluation-using-word-error-rate-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62274.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">322</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2115</span> Experimental Study on the Heat Transfer Characteristics of the 200W Class Woofer Speaker</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyung-Jin%20Kim">Hyung-Jin Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Dae-Wan%20Kim"> Dae-Wan Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Moo-Yeon%20Lee"> Moo-Yeon Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this study is to experimentally investigate the heat transfer characteristics of 200 W class woofer speaker units with 
the input voice signals. The temperature and heat transfer characteristics of the 200 W class woofer speaker unit were experimentally tested with several input voice signals of 1500 Hz, 2500 Hz, and 5000 Hz, respectively. From the experiments, it can be observed that the temperature of the woofer speaker unit, including the voice-coil part, increases with a decrease in the input voice signal frequency. Also, the temperature difference between measured points of the voice coil increases with a decrease in the input voice signal frequency. In addition, the heat transfer of the woofer speaker with an input voice signal of 1500 Hz is 40% higher than that with an input voice signal of 5000 Hz at a measuring time of 200 seconds. It can be concluded from the experiments that the voice-coil temperature initially increases rapidly with time and, after a certain period, follows an exponential trend. During this time-dependent temperature change, it can be observed that the high-frequency voice signal is more stable than the low-frequency one. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heat%20transfer" title="heat transfer">heat transfer</a>, <a href="https://publications.waset.org/abstracts/search?q=temperature" title=" temperature"> temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20coil" title=" voice coil"> voice coil</a>, <a href="https://publications.waset.org/abstracts/search?q=woofer%20speaker" title=" woofer speaker"> woofer speaker</a> </p> <a href="https://publications.waset.org/abstracts/5142/experimental-study-on-the-heat-transfer-characteristics-of-the-200w-class-woofer-speaker" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">360</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2114</span> The Functions of the Student Voice and Student-Centred Teaching Practices in Classroom-Based Music Education</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sofia%20Douklia">Sofia Douklia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present paper aims to present the important role of the ‘student voice’ and of the music teacher in the classroom, both of which contribute to more student-centered music education. The aim is to focus on the functions of the student voice across the music spectrum as it emerges in the music classroom, and on the teacher’s methodologies and techniques used in the music classroom. 
The music curriculum, the principles of student-centered music education, and the role of students and teachers as music ambassadors have been considered the major music parameters of the student voice. The student voice is a noteworthy aspect of student-centered education, and all teachers should consider and promote its presence in their classrooms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=student%27s%20voice" title="student's voice">student's voice</a>, <a href="https://publications.waset.org/abstracts/search?q=student-centered%20education" title=" student-centered education"> student-centered education</a>, <a href="https://publications.waset.org/abstracts/search?q=music%20ambassadors" title=" music ambassadors"> music ambassadors</a>, <a href="https://publications.waset.org/abstracts/search?q=music%20teachers" title=" music teachers"> music teachers</a> </p> <a href="https://publications.waset.org/abstracts/155423/the-functions-of-the-student-voice-and-student-centred-teaching-practices-in-classroom-based-music-education" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155423.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">92</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2113</span> Unsupervised Reciter Recognition Using Gaussian Mixture Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Alwosheel">Ahmad Alwosheel</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Alqaraawi"> Ahmed Alqaraawi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work proposes an unsupervised text-independent 
probabilistic approach to recognizing a Quran reciter's voice. It is an accurate approach that works in real-time applications. This approach does not require prior information about reciter models. It has two phases: in the training phase, the reciters' acoustical features are modeled using Gaussian Mixture Models, while in the testing phase, an unlabeled reciter's acoustical features are examined against the GMM models. Using this approach, high accuracy is achieved with efficient computation time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Quran" title="Quran">Quran</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=reciter%20recognition" title=" reciter recognition"> reciter recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20Mixture%20Model" title=" Gaussian Mixture Model"> Gaussian Mixture Model</a> </p> <a href="https://publications.waset.org/abstracts/46532/unsupervised-reciter-recognition-using-gaussian-mixture-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46532.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2112</span> Voice over IP Quality of Service Evaluation for Mobile Ad Hoc Network in an Indoor Environment for Different Voice Codecs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lina%20Abou%20Haibeh">Lina Abou Haibeh</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Nadir%20Hakem"> Nadir Hakem</a>, <a href="https://publications.waset.org/abstracts/search?q=Ousama%20Abu%20Safia"> Ousama Abu Safia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the performance and quality of Voice over IP (VoIP) calls carried over a Mobile Ad Hoc Network (MANET) with a number of SIP nodes registered on a SIP proxy are analyzed. The testing campaigns are carried out in an indoor corridor structure with well-defined channel characteristics and a channel model, for the voice codecs G.711, G.727, and G.723.1, which are commonly used in VoIP technology. The calls’ quality is evaluated using four Quality of Service (QoS) metrics, namely mean opinion score (MOS), jitter, delay, and packet loss. The relationship between the wireless channel’s parameters and the optimum codec is established. According to the experimental results, the voice codec G.711 has the best performance for the proposed MANET topology. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wireless%20channel%20modelling" title="wireless channel modelling">wireless channel modelling</a>, <a href="https://publications.waset.org/abstracts/search?q=Voip" title=" Voip"> Voip</a>, <a href="https://publications.waset.org/abstracts/search?q=MANET" title=" MANET"> MANET</a>, <a href="https://publications.waset.org/abstracts/search?q=session%20initiation%20protocol%20%28SIP%29" title=" session initiation protocol (SIP)"> session initiation protocol (SIP)</a>, <a href="https://publications.waset.org/abstracts/search?q=QoS" title=" QoS"> QoS</a> </p> <a href="https://publications.waset.org/abstracts/74102/voice-over-ip-quality-of-service-evaluation-for-mobile-ad-hoc-network-in-an-indoor-environment-for-different-voice-codecs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74102.pdf" 
target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">228</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2111</span> Reconceptualising the Voice of Children in Child Protection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sharon%20Jackson">Sharon Jackson</a>, <a href="https://publications.waset.org/abstracts/search?q=Lynn%20Kelly"> Lynn Kelly</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a conceptual review of the interdisciplinary literature which has theorised the concept of ‘children’s voices’. The primary aim is to identify and consider the theoretical relevance of conceptual thought on ‘children’s voices’ for research and practice in child protection contexts. Attending to the ‘voice of the child’ has become a core principle of social work practice in contemporary child protection contexts. Discourses of voice permeate the legislative, policy and practice frameworks of child protection practices within the UK and internationally. Voice is positioned within a ‘child-centred’ moral imperative to ‘hear the voices’ of children and take their preferences and perspectives into account. This practice is now considered to be central to working in a child-centered way. The genesis of this call to voice is revealed through sociological analysis of twentieth-century child welfare reform as rooted inter alia in intersecting political, social and cultural discourses which have situated children and childhood as sites of state intervention, as enshrined in the 1989 United Nations Convention on the Rights of the Child, ratified by the UK government in 1991, and more specifically in Article 12 of the Convention. 
From a policy and practice perspective, the professional ‘capturing’ of children’s voices has come to saturate child protection practice. This has prompted a stream of directives, resources, advisory publications and ‘how-to’ guides which attempt to articulate practice methods to ‘listen’, ‘hear’ and above all – ‘capture’ the ‘voice of the child’. The idiom ‘capturing the voice of the child’ is frequently invoked within the literature to express the requirements of the child-centered practice task to be accomplished. Despite the centrality of voice, and an obsession with ‘capturing’ voices, evidence from research, inspection processes, serious case reviews, and child abuse and death inquiries has consistently highlighted professional neglect of ‘the voice of the child’. Notable research studies have highlighted the relative absence of the child’s voice in social work assessment practices, a troubling lack of meaningful engagement with children and the need to more thoroughly examine communicative practices in child protection contexts. As a consequence, the project of capturing ‘the voice of the child’ has intensified, and there has been an increasing focus on developing methods and professional skills to attend to voice. This has been guided by a recognition that professionals often lack the skills and training to engage with children in age-appropriate ways. We argue, however, that the problem with ‘capturing’ and [re]representing ‘voice’ in child protection contexts is, more fundamentally, a failure to adequately theorise the concept of ‘voice’ in the ‘voice of the child’. For the most part, ‘the voice of the child’ incorporates psychological conceptions of child development. While these concepts are useful in the context of direct work with children, they fail to consider other strands of sociological thought, which position ‘the voice of the child’ within an agentic paradigm to emphasise the active agency of the child. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=child-centered" title="child-centered">child-centered</a>, <a href="https://publications.waset.org/abstracts/search?q=child%20protection" title=" child protection"> child protection</a>, <a href="https://publications.waset.org/abstracts/search?q=views%20of%20the%20child" title=" views of the child"> views of the child</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20of%20the%20child" title=" voice of the child"> voice of the child</a> </p> <a href="https://publications.waset.org/abstracts/86527/reconceptualising-the-voice-of-children-in-child-protection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86527.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">137</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2110</span> On Voice in English: An Awareness Raising Attempt on Passive Voice</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Meral%20Melek%20Unver">Meral Melek Unver</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to explore ways to help English as a Foreign Language (EFL) learners notice and revise voice in English and raise their awareness of when and how to use active and passive voice to convey meaning in their written and spoken work. Because passive voice is commonly preferred in certain genres such as academic essays and news reports, despite the current trends promoting active voice, it is essential for learners to be fully aware of the meaning, use and form of passive voice to better communicate. 
The participants in the study are 22 EFL learners taking a one-year intensive English course at a university, who will receive English-medium education (EMI) in their departmental studies in the following academic year. Data from students’ written and oral work was collected over a four-week period, and misuse or inaccurate use of passive voice was identified. The analysis of the data showed that the learners failed to make sensible decisions about when and how to use passive voice, partly because of the differences between their mother tongue and English, and partly because they were not aware that active and passive voice are not always interchangeable. To overcome this, a Test-Teach-Test shaped lesson, as opposed to a Present-Practice-Produce shaped lesson, was designed and implemented to raise their awareness of the decisions they needed to make in choosing the voice and to help them notice the meaning and use of passive voice through concept-checking questions. The results suggested, first, that awareness-raising activities on the meaning and use of voice in English are beneficial in eliciting accurate and meaningful output from students. Also, helping students notice and re-notice passive voice through carefully designed activities helps them internalize its use and form. As a result of the study, a number of activities are suggested for revising and noticing passive voice, as well as a short questionnaire to help EFL teachers self-reflect on their teaching. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=voice%20in%20English" title="voice in English">voice in English</a>, <a href="https://publications.waset.org/abstracts/search?q=test-teach-test" title=" test-teach-test"> test-teach-test</a>, <a href="https://publications.waset.org/abstracts/search?q=passive%20voice" title=" passive voice"> passive voice</a>, <a href="https://publications.waset.org/abstracts/search?q=English%20language%20teaching" title=" English language teaching "> English language teaching </a> </p> <a href="https://publications.waset.org/abstracts/71895/on-voice-in-english-an-awareness-raising-attempt-on-passive-voice" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71895.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">222</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2109</span> Phone Number Spoofing Attack in VoLTE 4G</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joo-Hyung%20Oh">Joo-Hyung Oh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The number of service users of 4G VoLTE (voice over LTE) using LTE data networks is rapidly growing. VoLTE, based on an all-IP network, enables clearer and higher-quality voice calls than 3G. It does, however, pose new challenges: carrying a voice call over IP networks makes it vulnerable to security threats such as wiretapping and forged or falsified information. In particular, stealing other users’ phone numbers and forging or falsifying call request messages for outgoing voice calls within VoLTE result in considerable losses, including fraudulent user billing and voice phishing targeting acquaintances. 
This paper focuses on the threat of caller phone number spoofing in VoLTE and on countermeasure technology as a safety measure for mobile communication networks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=LTE" title="LTE">LTE</a>, <a href="https://publications.waset.org/abstracts/search?q=4G" title=" 4G"> 4G</a>, <a href="https://publications.waset.org/abstracts/search?q=VoLTE" title=" VoLTE"> VoLTE</a>, <a href="https://publications.waset.org/abstracts/search?q=phone%20number%20spoofing" title=" phone number spoofing"> phone number spoofing</a> </p> <a href="https://publications.waset.org/abstracts/22199/phone-number-spoofing-attack-in-volte-4g" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22199.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">432</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2108</span> Voice Signal Processing and Coding in MATLAB Generating a Plasma Signal in a Tesla Coil for a Security System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Juan%20Jimenez">Juan Jimenez</a>, <a href="https://publications.waset.org/abstracts/search?q=Erika%20Yambay"> Erika Yambay</a>, <a href="https://publications.waset.org/abstracts/search?q=Dayana%20Pilco"> Dayana Pilco</a>, <a href="https://publications.waset.org/abstracts/search?q=Brayan%20Parra"> Brayan Parra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an investigation of voice signal processing and coding using MATLAB, with the objective of generating a plasma signal on a Tesla coil within a security system. 
The approach focuses on using advanced voice signal processing techniques to encode and modulate the audio signal, which is then amplified and applied to a Tesla coil. The result is the creation of a striking visual effect of voice-controlled plasma with specific applications in security systems. The article explores the technical aspects of voice signal processing, the generation of the plasma signal, and its relationship to security. The implications and creative potential of this technology are discussed, highlighting its relevance at the forefront of research in signal processing and visual effect generation in the field of security systems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=voice%20signal%20processing" title="voice signal processing">voice signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20signal%20coding" title=" voice signal coding"> voice signal coding</a>, <a href="https://publications.waset.org/abstracts/search?q=MATLAB" title=" MATLAB"> MATLAB</a>, <a href="https://publications.waset.org/abstracts/search?q=plasma%20signal" title=" plasma signal"> plasma signal</a>, <a href="https://publications.waset.org/abstracts/search?q=Tesla%20coil" title=" Tesla coil"> Tesla coil</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20system" title=" security system"> security system</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20effects" title=" visual effects"> visual effects</a>, <a href="https://publications.waset.org/abstracts/search?q=audiovisual%20interaction" title=" audiovisual interaction"> audiovisual interaction</a> </p> <a href="https://publications.waset.org/abstracts/170828/voice-signal-processing-and-coding-in-matlab-generating-a-plasma-signal-in-a-tesla-coil-for-a-security-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170828.pdf" 
target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">94</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2107</span> Phone Number Spoofing Attack in VoLTE</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joo-Hyung%20Oh">Joo-Hyung Oh</a>, <a href="https://publications.waset.org/abstracts/search?q=Sekwon%20Kim"> Sekwon Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Myoungsun%20Noh"> Myoungsun Noh</a>, <a href="https://publications.waset.org/abstracts/search?q=Chaetae%20Im"> Chaetae Im</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The number of service users of 4G VoLTE (voice over LTE) using LTE data networks is rapidly growing. VoLTE, based on an all-IP network, enables clearer and higher-quality voice calls than 3G. It does, however, pose new challenges: carrying a voice call over IP networks makes it vulnerable to security threats such as wiretapping and forged or falsified information. In particular, stealing other users’ phone numbers and forging or falsifying call request messages for outgoing voice calls within VoLTE result in considerable losses, including fraudulent user billing and voice phishing targeting acquaintances. This paper focuses on the threat of caller phone number spoofing in VoLTE and on countermeasure technology as a safety measure for mobile communication networks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=LTE" title="LTE">LTE</a>, <a href="https://publications.waset.org/abstracts/search?q=4G" title=" 4G"> 4G</a>, <a href="https://publications.waset.org/abstracts/search?q=VoLTE" title=" VoLTE"> VoLTE</a>, <a href="https://publications.waset.org/abstracts/search?q=phone%20number%20spoofing" title=" phone number spoofing"> phone number spoofing</a> </p> <a href="https://publications.waset.org/abstracts/21718/phone-number-spoofing-attack-in-volte" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21718.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">522</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2106</span> The Effect of Voice Recognition Dictation Software on Writing Quality in Third Grade Students: An Action Research Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Timothy%20J.%20Grebec">Timothy J. Grebec</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigated whether using a voice dictation software program (i.e., Google Voice Typing) has an impact on student writing quality. The research took place in a third-grade general education classroom in a suburban school setting. Because the study involved minors, all data was encrypted and deidentified before analysis. The students completed a series of writings prior to the beginning of the intervention to determine their thoughts and skill level with writing. 
During the intervention phase, the students were introduced to the voice dictation software, given an opportunity to practice using it, and then assigned writing prompts to be completed using the software. The prompts written by nineteen student participants and surveys of student opinions on writing established a baseline for the study. The data showed that using the dictation software resulted in a 34% increase in response quality (as measured against the Pennsylvania State Standardized Assessment [PSSA] writing guidelines). Of particular interest was the increase in students' proficiency in demonstrating mastery of English language conventions and in elaborating on the content. Although this type of research is relatively new, it has the potential to reshape the strategies educators have at their disposal when instructing students on written language. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=educational%20technology" title="educational technology">educational technology</a>, <a href="https://publications.waset.org/abstracts/search?q=accommodations" title=" accommodations"> accommodations</a>, <a href="https://publications.waset.org/abstracts/search?q=students%20with%20disabilities" title=" students with disabilities"> students with disabilities</a>, <a href="https://publications.waset.org/abstracts/search?q=writing%20instruction" title=" writing instruction"> writing instruction</a>, <a href="https://publications.waset.org/abstracts/search?q=21st%20century%20education" title=" 21st century education"> 21st century education</a> </p> <a href="https://publications.waset.org/abstracts/170286/the-effect-of-voice-recognition-dictation-software-on-writing-quality-in-third-grade-students-an-action-research-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170286.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light 
px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">75</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=71">71</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=72">72</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=voice%20recognition&page=2" 
rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>