<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: audio lingual method</title> <meta name="description" content="Search results for: audio lingual method"> <meta name="keywords" content="audio lingual method"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="audio lingual method" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="audio lingual method"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 19289</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: audio lingual method</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19289</span> Audio-Lingual Method and the English-Speaking Proficiency of Grade 11 Students</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marthadale%20Acibo%20Semacio">Marthadale Acibo Semacio</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speaking skill is a crucial part of English language teaching and learning. This actually shows the great importance of this skill in English language classes. Through speaking, ideas and thoughts are shared with other people, and a smooth interaction between people takes place. The study examined the levels of speaking proficiency of the control and experimental groups on pronunciation, grammatical accuracy, and fluency. 
As a quasi-experimental study, it also determined whether significant changes occurred in the students' speaking proficiency levels, in terms of correct pronunciation, grammatical accuracy, and fluency, when the two methods, the traditional and the audio-lingual, were applied to the groups of students in the English language. Descriptive and inferential statistics were employed according to the stated specific problems. The study employed a video presentation, introduced with prior information: in the video, the teacher acts as a model, giving instructions on what is to be done, and the students then perform the activity. The students were paired purposively based on their learning capabilities. Observing proper ethics, their performance was audio-recorded to help the researcher assess each learner using a modified speaking rubric. The study revealed that those under the traditional method were more fluent than those under the audio-lingual method. With respect to the way each method deals with the feelings of the students, the audio-lingual method fails to provide a principle relating to this area and follows the assumption that the students' intrinsic motivation to learn the target language will spring from their interest in the structure of the language. However, the speaking proficiency levels of the students were remarkably reinforced in reading different words through the aid of aural media with their teachers. The study concluded that the audio-lingual method is not a stand-alone method but an aid that helps the teacher improve the students' speaking proficiency in the English language. Hence, the audio-lingual approach is encouraged in teaching the English language, on top of the chalk-and-talk or traditional method, to improve the speaking proficiency of students. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio-lingual" title="audio-lingual">audio-lingual</a>, <a href="https://publications.waset.org/abstracts/search?q=speaking" title=" speaking"> speaking</a>, <a href="https://publications.waset.org/abstracts/search?q=grammar" title=" grammar"> grammar</a>, <a href="https://publications.waset.org/abstracts/search?q=pronunciation" title=" pronunciation"> pronunciation</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=fluency" title=" fluency"> fluency</a>, <a href="https://publications.waset.org/abstracts/search?q=proficiency" title=" proficiency"> proficiency</a> </p> <a href="https://publications.waset.org/abstracts/161963/audio-lingual-method-and-the-english-speaking-proficiency-of-grade-11-students" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">68</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19288</span> Teaching Speaking Skills to Adult English Language Learners through ALM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wichuda%20Kunnu">Wichuda Kunnu</a>, <a href="https://publications.waset.org/abstracts/search?q=Aungkana%20Sukwises"> Aungkana Sukwises</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Audio-lingual method (ALM) is a teaching approach that is claimed that ineffective for teaching second/foreign languages. Because some linguists and second/foreign language teachers believe that ALM is a rote learning style. 
However, this study is done on a belief that ALM will be able to solve Thais’ English speaking problem. This paper aims to report the findings on teaching English speaking to adult learners with an “adapted ALM”, one distinction of which is to use Thai as the medium language of instruction. The participants are consisted of 9 adult learners. They were allowed to speak English more freely using both the materials presented in the class and their background knowledge of English. At the end of the course, they spoke English more fluently, more confidently, to the extent that they applied what they learnt both in and outside the class. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=teaching%20English" title="teaching English">teaching English</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method" title=" audio lingual method"> audio lingual method</a>, <a href="https://publications.waset.org/abstracts/search?q=cognitive%20science" title=" cognitive science"> cognitive science</a>, <a href="https://publications.waset.org/abstracts/search?q=psychology" title=" psychology"> psychology</a> </p> <a href="https://publications.waset.org/abstracts/12355/teaching-speaking-skills-to-adult-english-language-learners-through-alm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12355.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">418</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19287</span> An Anatomic Approach to the Lingual Artery in the Carotid Triangle in South Indian Population </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Ashwin%20Rai">Ashwin Rai</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajalakshmi%20Rai"> Rajalakshmi Rai</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajanigandha%20%20Vadgoankar"> Rajanigandha Vadgoankar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Lingual artery is the chief artery of the tongue and the neighboring structures pertaining to the oral cavity. At the carotid triangle, this artery arises from the external carotid artery opposite to the tip of greater cornua of hyoid bone, undergoes a tortuous course with its first part being crossed by the hypoglossal nerve and runs beneath the digastric muscle. Then it continues to supply the tongue as the deep lingual artery. The aim of this study is to draw surgeon's attention to the course of lingual artery in this area since it can be accidentally lesioned causing an extensive hemorrhage in certain surgical or dental procedures. The study was conducted on 44 formalin fixed head and neck specimens focusing on the anatomic relations of lingual artery. In this study, we found that the lingual artery is located inferior to the digastric muscle and the hypoglossal nerve contradictory to the classical description. This data would be useful during ligation of lingual artery to avoid injury to the hypoglossal nerve in surgeries related to the anterior triangle of neck. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=anterior%20triangle" title="anterior triangle">anterior triangle</a>, <a href="https://publications.waset.org/abstracts/search?q=digastric%20muscle" title=" digastric muscle"> digastric muscle</a>, <a href="https://publications.waset.org/abstracts/search?q=hypoglossal%20nerve" title=" hypoglossal nerve"> hypoglossal nerve</a>, <a href="https://publications.waset.org/abstracts/search?q=lingual%20artery" title=" lingual artery"> lingual artery</a> </p> <a href="https://publications.waset.org/abstracts/78096/an-anatomic-approach-to-the-lingual-artery-in-the-carotid-triangle-in-south-indian-population" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78096.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">179</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19286</span> Ultrastructure of the Tongue of the African Beauty Snake Psammophis sibilans</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20M.%20A.%20Abumandour">Mohamed M. A. Abumandour</a>, <a href="https://publications.waset.org/abstracts/search?q=Neveen%20E.%20R.%20El-Bakary"> Neveen E. R. El-Bakary</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present work performed on the six tongues of African Beauty snake (Psammophis sibilans) that were obtained immediately after their catching, from agricultural fields, Desouk city, Kafrelsheikh Governorate, Egypt. These collected snakes should be from any oral abnormalities or injuries. The lingual surface of the Psammophis sibilans was studied by scanning electron microscopy (SEM). 
The surface of the bifurcated apex was smoother than the lingual body. The median lingual sulcus was deep and contained a number of the taste pores. By the high magnification of SEM of each part of a bifurcated area of the lingual apex have numerous taste buds and no lingual papillae were observed. A few numbers of papillae were observed in the lingual body. The microridges and microvilli distributed in the lingual body helped in spreading of mucus over the epithelial surface. Taste pores and papillae in the tongue indicate the presence of a direct chemo-sensory function for the tongue of these snakes as the chemicals dissolved in the mucus then transferred to Jacobson organ. To conclude, the bifurcation appearance of the snake lingual tip act as a chemical or edge detector help in the process named chemo-mechano-reception. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=African%20beauty%20snake" title="African beauty snake">African beauty snake</a>, <a href="https://publications.waset.org/abstracts/search?q=taste%20buds" title=" taste buds"> taste buds</a>, <a href="https://publications.waset.org/abstracts/search?q=taste%20pores" title=" taste pores"> taste pores</a>, <a href="https://publications.waset.org/abstracts/search?q=tongue" title=" tongue"> tongue</a>, <a href="https://publications.waset.org/abstracts/search?q=papillae" title=" papillae"> papillae</a> </p> <a href="https://publications.waset.org/abstracts/111218/ultrastructure-of-the-tongue-of-the-african-beauty-snake-psammophis-sibilans" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/111218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge 
badge-info">19285</span> Audio-Visual Aids and the Secondary School Teaching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shrikrishna%20Mishra">Shrikrishna Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Badri%20Yadav"> Badri Yadav</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this complex society of today where experiences are innumerable and varied, it is not at all possible to present every situation in its original colors hence the opportunities for learning by actual experiences always are not at all possible. It is only through the use of proper audio visual aids that the life situation can be trough in the class room by an enlightened teacher in their simplest form and representing the original to the highest point of similarity which is totally absent in the verbal or lecture method. In the presence of audio aids, the attention is attracted interest roused and suitable atmosphere for proper understanding is automatically created, but in the existing traditional method greater efforts are to be made in order to achieve the aforesaid essential requisite. Inspire of the best and sincere efforts on the side of the teacher the net effect as regards understanding or learning in general is quite negligible. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Audio-Visual%20Aids" title="Audio-Visual Aids">Audio-Visual Aids</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20secondary%20school%20teaching" title=" the secondary school teaching"> the secondary school teaching</a>, <a href="https://publications.waset.org/abstracts/search?q=complex%20society" title=" complex society"> complex society</a>, <a href="https://publications.waset.org/abstracts/search?q=audio" title=" audio"> audio</a> </p> <a href="https://publications.waset.org/abstracts/16270/audio-visual-aids-and-the-secondary-school-teaching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16270.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">482</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19284</span> A Study on the Improvement of Mobile Device Call Buzz Noise Caused by Audio Frequency Ground Bounce</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jangje%20Park">Jangje Park</a>, <a href="https://publications.waset.org/abstracts/search?q=So%20Young%20Kim"> So Young Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The market demand for audio quality in mobile devices continues to increase, and audible buzz noise generated in time division communication is a chronic problem that goes against the market demand. In the case of time division type communication, the RF Power Amplifier (RF PA) is driven at the audio frequency cycle, and it makes various influences on the audio signal. 
In this paper, we measured the ground bounce noise generated by the peak current flowing through the ground network in the RF PA with the audio frequency; it was confirmed that the noise is the cause of the audible buzz noise during a call. In addition, a grounding method of the microphone device that can improve the buzzing noise was proposed. Considering that the level of the audio signal generated by the microphone device is -38dBV based on 94dB Sound Pressure Level (SPL), even ground bounce noise of several hundred uV will fall within the range of audible noise if it is induced by the audio amplifier. Through the grounding method of the microphone device proposed in this paper, it was confirmed that the audible buzz noise power density at the RF PA driving frequency was improved by more than 5dB under the conditions of the Printed Circuit Board (PCB) used in the experiment. A fundamental improvement method was presented regarding the buzzing noise during a mobile phone call. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20frequency" title="audio frequency">audio frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=buzz%20noise" title=" buzz noise"> buzz noise</a>, <a href="https://publications.waset.org/abstracts/search?q=ground%20bounce" title=" ground bounce"> ground bounce</a>, <a href="https://publications.waset.org/abstracts/search?q=microphone%20grounding" title=" microphone grounding"> microphone grounding</a> </p> <a href="https://publications.waset.org/abstracts/150713/a-study-on-the-improvement-of-mobile-device-call-buzz-noise-caused-by-audio-frequency-ground-bounce" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150713.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> 
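The level comparison in the abstract above can be sketched numerically. This is only an illustrative calculation, not the authors' measurement: the -38 dBV / 94 dB SPL figure comes from the abstract, while the 300 µV bounce amplitude is a hypothetical value standing in for "several hundred µV".

```python
import math

def dbv_to_volts(dbv):
    # dBV: decibels relative to 1 V RMS
    return 10 ** (dbv / 20)

def volts_to_dbv(volts):
    return 20 * math.log10(volts)

# Microphone output at 94 dB SPL, per the abstract: -38 dBV
mic_v = dbv_to_volts(-38)      # about 12.6 mV
bounce_v = 300e-6              # hypothetical ground bounce: 300 uV

# Margin between the wanted signal and the coupled noise; a margin of
# only ~32 dB explains why the bounce can remain audible after the
# audio amplifier boosts both together.
margin_db = volts_to_dbv(mic_v) - volts_to_dbv(bounce_v)
print(f"mic level: {mic_v * 1e3:.1f} mV, margin over bounce: {margin_db:.1f} dB")
```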
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19283</span> Cross-Knowledge Graph Relation Completion for Non-Isomorphic Cross-Lingual Entity Alignment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuhong%20Zhang">Yuhong Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Dan%20Lu"> Dan Lu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chenyang%20Bu"> Chenyang Bu</a>, <a href="https://publications.waset.org/abstracts/search?q=Peipei%20Li"> Peipei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Kui%20Yu"> Kui Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xindong%20Wu"> Xindong Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Cross-Lingual Entity Alignment (CLEA) task aims to find the aligned entities that refer to the same identity from two knowledge graphs (KGs) in different languages. It is an effective way to enhance the performance of data mining for KGs with scarce resources. In real-world applications, the neighborhood structures of the same entities in different KGs tend to be non-isomorphic, which makes the representation of entities contain diverse semantic information and then poses a great challenge for CLEA. In this paper, we try to address this challenge from two perspectives. On the one hand, the cross-KG relation completion rules are designed with the alignment constraint of entities and relations to improve the topology isomorphism of two KGs. On the other hand, a representation method combining isomorphic weights is designed to include more isomorphic semantics for counterpart entities, which will benefit the CLEA. Experiments show that our model can improve the isomorphism of two KGs and the alignment performance, especially for two non-isomorphic KGs. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=knowledge%20graphs" title="knowledge graphs">knowledge graphs</a>, <a href="https://publications.waset.org/abstracts/search?q=cross-lingual%20entity%20alignment" title=" cross-lingual entity alignment"> cross-lingual entity alignment</a>, <a href="https://publications.waset.org/abstracts/search?q=non-isomorphic" title=" non-isomorphic"> non-isomorphic</a>, <a href="https://publications.waset.org/abstracts/search?q=relation%20completion" title=" relation completion"> relation completion</a> </p> <a href="https://publications.waset.org/abstracts/155961/cross-knowledge-graph-relation-completion-for-non-isomorphic-cross-lingual-entity-alignment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155961.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">124</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19282</span> The Influence of Audio on Perceived Quality of Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Silvio%20Ricardo%20Rodrigues%20Sanches">Silvio Ricardo Rodrigues Sanches</a>, <a href="https://publications.waset.org/abstracts/search?q=Bianca%20Cogo%20Barbosa"> Bianca Cogo Barbosa</a>, <a href="https://publications.waset.org/abstracts/search?q=Beatriz%20Regina%20Brum"> Beatriz Regina Brum</a>, <a href="https://publications.waset.org/abstracts/search?q=Cl%C3%A9ber%20Gimenez%20Corr%C3%AAa"> Cléber Gimenez Corrêa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To evaluate the quality of a segmentation algorithm, the authors use subjective or objective metrics. 
Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm. Objective metrics require subjective experiments only during their development. Subjective experiments typically display to users some videos (generated from frames with segmentation errors) that simulate the environment of an application domain. This user feedback is crucial information for metric definition. In the subjective experiments applied to develop some state-of-the-art metrics used to test segmentation algorithms, the videos displayed during the experiments did not contain audio. Audio is an essential component in applications such as videoconference and augmented reality. If the audio influences the user’s perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated using data from these experiments. This work aims to identify if the audio influences the user’s perception of segmentation quality in background substitution applications with audio. The proposed approach used a subjective method based on formal video quality assessment methods. The results showed that audio influences the quality of segmentation perceived by a user. 
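The abstract above contrasts objective metrics with subjective experiments. As a purely illustrative sketch (not one of the state-of-the-art metrics the authors refer to), an objective segmentation measure can be as simple as the fraction of pixels whose predicted foreground/background label disagrees with a ground-truth mask:

```python
def segmentation_error_rate(predicted, ground_truth):
    """Fraction of pixels whose foreground/background label (0/1)
    disagrees with the ground truth; both masks are flat sequences."""
    if len(predicted) != len(ground_truth):
        raise ValueError("masks must have the same size")
    errors = sum(p != g for p, g in zip(predicted, ground_truth))
    return errors / len(ground_truth)

# Toy 4x2 masks, flattened: one mislabeled pixel out of eight
pred = [1, 1, 0, 0, 1, 0, 0, 0]
truth = [1, 1, 0, 0, 1, 1, 0, 0]
print(segmentation_error_rate(pred, truth))  # 0.125
```

A metric like this needs no user feedback once defined, which is exactly why the subjective experiments used to calibrate such metrics (with or without audio) matter so much.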
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20substitution" title="background substitution">background substitution</a>, <a href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio" title=" influence of audio"> influence of audio</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation%20evaluation" title=" segmentation evaluation"> segmentation evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation%20quality" title=" segmentation quality"> segmentation quality</a> </p> <a href="https://publications.waset.org/abstracts/148456/the-influence-of-audio-on-perceived-quality-of-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148456.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19281</span> Spatial Audio Player Using Musical Genre Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jun-Yong%20Lee">Jun-Yong Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Gook%20Kim"> Hyoung-Gook Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a smart music player that combines the musical genre classification and the spatial audio processing. The musical genre is classified based on content analysis of the musical segment detected from the audio stream. In parallel with the classification, the spatial audio quality is achieved by adding an artificial reverberation in a virtual acoustic space to the input mono sound. 
Thereafter, the spatial sound is boosted with the given frequency gains, based on the musical genre, when played back. Experiments measured the accuracy of detecting the musical segment from the audio stream and of its musical genre classification. A listening test was performed on the virtual-acoustic-space-based spatial audio processing. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20equalization" title="automatic equalization">automatic equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=genre%20classification" title=" genre classification"> genre classification</a>, <a href="https://publications.waset.org/abstracts/search?q=music%20segment%20detection" title=" music segment detection"> music segment detection</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20audio%20processing" title=" spatial audio processing"> spatial audio processing</a> </p> <a href="https://publications.waset.org/abstracts/7561/spatial-audio-player-using-musical-genre-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7561.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">429</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19280</span> Mathematical Model That Using Scrambling and Message Integrity Methods in Audio Steganography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Salem%20Atoum">Mohammed Salem Atoum</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The success of audio steganography lies in ensuring the imperceptibility of the embedded message in the stego file and in withstanding any form of 
intentional or unintentional degradation of the message (robustness). Audio steganographic techniques that utilize the LSB of an audio stream to embed the message have gained a lot of popularity over the years for meeting perceptual transparency, robustness, and capacity requirements. This research proposes an XLSB technique in order to circumvent the weakness observed in the LSB technique. A scrambling technique is introduced in two steps: partitioning the message into blocks, followed by permuting each block in order to confuse the contents of the message. The message is embedded in an MP3 audio sample. After extracting the message, the permutation codebook is used to re-order it into its original form. Md5sum and SHA-256 are used to verify whether the message was altered during transmission. Experimental results show that XLSB performs better than LSB. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=XLSB" title="XLSB">XLSB</a>, <a href="https://publications.waset.org/abstracts/search?q=scrambling" title=" scrambling"> scrambling</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20steganography" title=" audio steganography"> audio steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=security" title=" security"> security</a> </p> <a href="https://publications.waset.org/abstracts/42449/mathematical-model-that-using-scrambling-and-message-integrity-methods-in-audio-steganography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">363</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19279</span> Perception of Value Affecting Engagement Through Online Audio Communication</h5> <div 
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Apipol%20Penkitti">Apipol Penkitti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The new normal, a new way of life that stemmed from the COVID-19 outbreak, gave rise to a new form of social media: audio-based social platforms (ABSPs), known as Clubhouse, Twitter space, and Facebook live audio room. These platforms, which feature audio-based communication, became popular in a short span of time. The objective of this research study is to understand ABSP users’ behaviors in Thailand. The study, which draws on functional attitude theory, uses and gratifications theory, and social influence theory, examines consumers’ perceived utilitarian, hedonic, and social value as they affect engagement. The study follows a mixed-methods paradigm, utilizing the triangulation model as its framework. Data were acquired through questionnaires from a sample of 384 male, female, and LGBTQA+ individuals aged 25-34, from various occupations, who have used audio-based social platform applications. The study employs structural equation modeling to analyze the relationships between variables and uses semi-structured interviews to understand the rationale behind them. The study found that hedonic value directly affects engagement.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20based%20social%20platform" title="audio based social platform">audio based social platform</a>, <a href="https://publications.waset.org/abstracts/search?q=engagement" title=" engagement"> engagement</a>, <a href="https://publications.waset.org/abstracts/search?q=hedonic" title=" hedonic"> hedonic</a>, <a href="https://publications.waset.org/abstracts/search?q=perceived%20value" title=" perceived value"> perceived value</a>, <a href="https://publications.waset.org/abstracts/search?q=social" title=" social"> social</a>, <a href="https://publications.waset.org/abstracts/search?q=utilitarian" title=" utilitarian"> utilitarian</a> </p> <a href="https://publications.waset.org/abstracts/147744/perception-of-value-affecting-engagement-through-online-audio-communication" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147744.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">126</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19278</span> Freedom of Expression and Its Restriction in Audiovisual Media</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sevil%20Yildiz">Sevil Yildiz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Audio visual communication is a type of collective expression. Collective expression activity informs the masses, gives direction to opinions and establishes public opinion. Due to these characteristics, audio visual communication must be subjected to special restrictions. This has been stipulated in both the Constitution and the European Human Rights Agreement. 
This paper aims to review freedom of expression and its restriction in audio visual media. For this purpose, the authorisation of the Radio and Television Supreme Council to impose sanctions as an independent administrative authority empowered to regulate the field of audio visual communication has been reviewed with regard to freedom of expression and its limits. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20visual%20media" title="audio visual media">audio visual media</a>, <a href="https://publications.waset.org/abstracts/search?q=freedom%20of%20expression" title=" freedom of expression"> freedom of expression</a>, <a href="https://publications.waset.org/abstracts/search?q=its%20limits" title=" its limits"> its limits</a>, <a href="https://publications.waset.org/abstracts/search?q=radio%20and%20television%20supreme%20council" title=" radio and television supreme council"> radio and television supreme council</a> </p> <a href="https://publications.waset.org/abstracts/39325/freedom-of-expression-and-its-restriction-in-audiovisual-media" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39325.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">326</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19277</span> Audio-Visual Recognition Based on Effective Model and Distillation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Heng%20Yang">Heng Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Tao%20Luo"> Tao Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Yakun%20Zhang"> Yakun Zhang</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Kai%20Wang"> Kai Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei%20Qin"> Wei Qin</a>, <a href="https://publications.waset.org/abstracts/search?q=Liang%20Xie"> Liang Xie</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20Yan"> Ye Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=Erwei%20Yin"> Erwei Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recent years have seen audio-visual recognition show great potential in strong-noise environments. Existing audio-visual recognition work has explored methods based on ResNet and feature fusion. However, on the one hand, ResNet always occupies a large amount of memory resources, restricting its application in engineering. On the other hand, feature merging also introduces interference in a high-noise environment. To solve these problems, we propose an effective framework with bidirectional distillation. First, considering its good feature-extraction performance, we chose the lightweight model EfficientNet as our spatial feature extractor. Second, self-distillation was applied to learn more information from the raw data. Finally, we propose bidirectional distillation for decision-level fusion. In more detail, our experimental results are based on a multi-modal dataset from 24 volunteers. Ultimately, the lipreading accuracy of our framework increased by 2.3% compared with existing systems, and our framework improved audio-visual fusion in a high-noise environment compared with an audio-only recognition system.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lipreading" title="lipreading">lipreading</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual" title=" audio-visual"> audio-visual</a>, <a href="https://publications.waset.org/abstracts/search?q=Efficientnet" title=" Efficientnet"> Efficientnet</a>, <a href="https://publications.waset.org/abstracts/search?q=distillation" title=" distillation"> distillation</a> </p> <a href="https://publications.waset.org/abstracts/146625/audio-visual-recognition-based-on-effective-model-and-distillation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19276</span> Potential Therapeutic Effect of Obestatin in Oral Mucositis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Agnieszka%20Stempniewicz">Agnieszka Stempniewicz</a>, <a href="https://publications.waset.org/abstracts/search?q=Piotr%20Ceranowicz"> Piotr Ceranowicz</a>, <a href="https://publications.waset.org/abstracts/search?q=Wojciech%20Macyk"> Wojciech Macyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Jakub%20Cieszkowski"> Jakub Cieszkowski</a>, <a href="https://publications.waset.org/abstracts/search?q=Beata%20Ku%C5%9Bnierz-Caba%C5%82a"> Beata Kuśnierz-Cabała</a>, <a href="https://publications.waset.org/abstracts/search?q=Katarzyna%20Ga%C5%82%C4%85zka"> Katarzyna Gałązka</a>, <a href="https://publications.waset.org/abstracts/search?q=Zygmunt%20Warzecha"> Zygmunt Warzecha</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Objectives: There are numerous strategies for the prevention or treatment of oral mucositis. However, their effectiveness is limited and does not correspond to expectations. Recent studies have shown that obestatin exhibits a protective effect and accelerates the healing of gastrointestinal mucosa. The aim of the present study was to examine the influence of obestatin administration on oral ulcers in rats. Methods: Lingual ulcers were induced by the use of acetic acid. Rats were treated twice a day intraperitoneally with saline or obestatin (4, 8, or 16 nmol/kg/dose) for five days. The study determined: lingual mucosa morphology, cell proliferation, mucosal blood flow, and mucosal pro-inflammatory interleukin-1β level (IL-1β). Results: In animals without induction of oral ulcers, treatment with obestatin was without any effect. Obestatin administration in rats with lingual ulcers increased the healing rate of these ulcers. Obestatin given at the dose of 8 or 16 nmol/kg/dose caused the strongest and similar therapeutic effect. This result was associated with a significant increase in blood flow and cell proliferation in gingival mucosa, as well as a significant decrease in IL-1β level. Conclusions: Obestatin accelerates the healing of lingual ulcers in rats. This therapeutic effect is well-correlated with an increase in blood flow and cell proliferation in oral mucosa, as well as a decrease in pro-inflammatory IL-1β levels. Obestatin is a potentially useful candidate for the prevention and treatment of oral mucositis. Acknowledgment: Agnieszka Stempniewicz acknowledges the support of InterDokMed project no. POWR.03.02.00-00-I013/16.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=oral%20mucositis" title="oral mucositis">oral mucositis</a>, <a href="https://publications.waset.org/abstracts/search?q=ulcers" title=" ulcers"> ulcers</a>, <a href="https://publications.waset.org/abstracts/search?q=obestatin" title=" obestatin"> obestatin</a>, <a href="https://publications.waset.org/abstracts/search?q=lingual%20mucosa" title=" lingual mucosa"> lingual mucosa</a> </p> <a href="https://publications.waset.org/abstracts/149974/potential-therapeutic-effect-of-obestatin-in-oral-mucositis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149974.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">73</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19275</span> Audio Information Retrieval in Mobile Environment with Fast Audio Classifier</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bruno%20T.%20Gomes">Bruno T. Gomes</a>, <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20A.%20Menezes"> José A. Menezes</a>, <a href="https://publications.waset.org/abstracts/search?q=Giordano%20Cabral"> Giordano Cabral</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the popularity of smartphones, mobile apps have emerged to meet diverse needs; however, the resources at their disposal are limited, either by the hardware, due to low computing power, or by the software, which does not have the robustness of the desktop environment. For example, automatic audio classification (AC) tasks, a subarea of musical information retrieval (MIR), require fast processing and a good success rate.
However, the mobile platform has limited computing power, and the best AC tools are only available for desktop. To solve these problems, the fast classifier adapts the most widespread MIR technologies to mobile environments, seeking a balance between speed and robustness. In the end, we found that it is possible to enjoy the best of MIR in mobile environments. This paper presents the results obtained and the difficulties encountered. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20classification" title="audio classification">audio classification</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20extraction" title=" audio extraction"> audio extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=environment%20mobile" title=" environment mobile"> environment mobile</a>, <a href="https://publications.waset.org/abstracts/search?q=musical%20information%20retrieval" title=" musical information retrieval"> musical information retrieval</a> </p> <a href="https://publications.waset.org/abstracts/36642/audio-information-retrieval-in-mobile-environment-with-fast-audio-classifier" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36642.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">545</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19274</span> Genetic Algorithms for Feature Generation in the Context of Audio Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20A.%20Menezes">José A.
Menezes</a>, <a href="https://publications.waset.org/abstracts/search?q=Giordano%20Cabral"> Giordano Cabral</a>, <a href="https://publications.waset.org/abstracts/search?q=Bruno%20T.%20Gomes"> Bruno T. Gomes</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Choosing good features is an essential part of machine learning. Recent techniques aim to automate this process. For instance, feature learning intends to learn the transformation of raw data into a useful representation for machine learning tasks. In automatic audio classification tasks, this is interesting since the audio, usually complex information, needs to be transformed into a computationally convenient input to process. Another technique tries to generate features by searching a feature space. Genetic algorithms, for instance, have been used to generate audio features by combining or modifying them. We find this approach particularly interesting and, despite the undeniable advances of feature learning approaches, we wanted to take a step forward in the use of genetic algorithms to find audio features, combining them with more conventional methods, like PCA, and inserting search control mechanisms, such as constraints over a confusion matrix. This work presents the results obtained on particular audio classification problems.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20generation" title="feature generation">feature generation</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20learning" title=" feature learning"> feature learning</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=music%20information%20retrieval" title=" music information retrieval"> music information retrieval</a> </p> <a href="https://publications.waset.org/abstracts/36638/genetic-algorithms-for-feature-generation-in-the-context-of-audio-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36638.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">435</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19273</span> Mood Recognition Using Indian Music</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vishwa%20Joshi">Vishwa Joshi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study of mood recognition in the field of music has gained a lot of momentum in recent years, with machine learning and data mining techniques and many audio features contributing considerably to analyzing and identifying the relation between mood and music.
In this paper, we take the same idea forward and make an effort to build a system for automatic recognition of the mood underlying audio song clips by mining their audio features. We evaluated several data classification algorithms in order to learn, train, and test a model describing the moods of these audio songs and developed an open-source framework. Before classification, preprocessing and feature extraction phases are necessary for removing noise and gathering features, respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=music" title="music">music</a>, <a href="https://publications.waset.org/abstracts/search?q=mood" title=" mood"> mood</a>, <a href="https://publications.waset.org/abstracts/search?q=features" title=" features"> features</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/24275/mood-recognition-using-indian-music" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24275.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">498</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19272</span> Using Audio-Visual Aids and Computer-Assisted Language Instruction (CALI) to Overcome Learning Difficulties of Listening in Students of Special Needs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadeq%20Al%20Yaari">Sadeq Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Alkhunayn"> Muhammad Alkhunayn</a>, <a
href="https://publications.waset.org/abstracts/search?q=Ayman%20Al%20Yaari"> Ayman Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Montaha%20Al%20Yaari"> Montaha Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Adham%20Al%20Yaari"> Adham Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Sajedah%20Al%20Yaari"> Sajedah Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatehi%20Eissa"> Fatehi Eissa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background & Aims: Audio-visual aids and computer-aided language instruction (CALI) have been documented to improve receptive skills, namely listening skills, in normal students. The increased listening has been attributed to the understanding of other interlocutors' speech, but recent experiments have suggested that audio-visual aids and CALI should be tested on the listening of students with special needs to see the effects of the former on the latter. This investigation described the effect of audio-visual aids and CALI on the performance of these students. Methods: Pre- and post-tests were administered to 40 students with special needs of both sexes, aged between 8 and 18, at al-Malādh school for students of special needs. A comparison was made between this group of students and a similar group (the control group). Whereas the former group underwent a listening course using audio-visual aids and CALI, the latter studied the same course with the same speech-language therapist (SLT) using the classical method. The outcomes of the two tests for the two groups were qualitatively and quantitatively analyzed. Results: Significant improvement in performance was found in the first group (treatment group) (post-test = 72.45% vs. pre-test = 25.55%) in comparison to the second (control) (post-test = 25.55% vs. pre-test = 23.72%). Compared to the males’ scores, the females’ scores were higher (1487 scores vs.
1411 scores). These results support the necessity of using audio-visual aids and CALI to teach listening in schools for students with special needs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=listening" title="listening">listening</a>, <a href="https://publications.waset.org/abstracts/search?q=receptive%20skills" title=" receptive skills"> receptive skills</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20aids" title=" audio-visual aids"> audio-visual aids</a>, <a href="https://publications.waset.org/abstracts/search?q=CALI" title=" CALI"> CALI</a>, <a href="https://publications.waset.org/abstracts/search?q=special%20needs" title=" special needs"> special needs</a> </p> <a href="https://publications.waset.org/abstracts/186406/using-audio-visual-aids-and-computer-assisted-language-instruction-cali-to-overcome-learning-difficulties-of-listening-in-students-of-special-needs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186406.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">48</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19271</span> Musical Tesla Coil Controlled by an Audio Signal Processed in Matlab</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Cuenca">Sandra Cuenca</a>, <a href="https://publications.waset.org/abstracts/search?q=Danilo%20Santana"> Danilo Santana</a>, <a href="https://publications.waset.org/abstracts/search?q=Anderson%20Reyes"> Anderson Reyes</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The following project is based on the manipulation of audio signals through the
Matlab software. An audio signal is modified, and the resulting signal, obtained through the auxiliary port of the computer, is passed through a signal amplifier whose amplified output is connected to a Tesla coil that behaves like a VU meter: the flashes at the output of the Tesla coil increase and decrease in intensity depending on the audio signal on the computer and on the voltage source from which it is driven. The amplified signal then passes to the Tesla coil and is shown in the plasma sphere with the corresponding flashes. This activation is governed by the parameters specified in the MATLAB algorithm, which contains the digital filters for manipulating the audio signal sent to the Tesla coil; the result is displayed in a plasma sphere with flashes in a combination of colors, commonly pink and purple, that varies according to the tone of the song. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=auxiliary%20port" title="auxiliary port">auxiliary port</a>, <a href="https://publications.waset.org/abstracts/search?q=tesla%20coil" title=" tesla coil"> tesla coil</a>, <a href="https://publications.waset.org/abstracts/search?q=vumeter" title=" vumeter"> vumeter</a>, <a href="https://publications.waset.org/abstracts/search?q=plasma%20sphere" title=" plasma sphere"> plasma sphere</a> </p> <a href="https://publications.waset.org/abstracts/170874/musical-tesla-coil-controlled-by-an-audio-signal-processed-in-matlab" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170874.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19270</span> Effective Parameter Selection for
Audio-Based Music Mood Classification for Christian Kokborok Song: A Regression-Based Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sanchali%20Das">Sanchali Das</a>, <a href="https://publications.waset.org/abstracts/search?q=Swapan%20Debbarma"> Swapan Debbarma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Music mood classification is developing in both the areas of music information retrieval (MIR) and natural language processing (NLP). Some languages used in India, such as Hindi and English, have considerable exposure in MIR, but research on mood classification in regional languages is very limited. In this paper, powerful audio-based features for Christian Kokborok songs are identified, and a mood classification task is performed. Kokborok is an Indo-Burman language spoken mainly in the northeastern part of India and also in other countries such as Bangladesh and Myanmar. For the audio-based classification task, useful audio features are extracted with the jMIR software. There are standard audio parameters for the audio-based task, but every language has its own unique characteristics. So here, the most significant features that best fit the database of Kokborok songs are analysed. A regression-based model is used to find the independent parameters that act as predictors, to predict the dependencies among parameters, and to show how they impact the overall classification result. For classification, WEKA 3.5 is used, and the selected parameters create a classification model. Another model is developed using all the standard audio features used by most researchers.
In this experiment, the essential parameters responsible for effective audio-based mood classification, as well as parameters that do not change significantly across the Christian Kokborok songs, are analysed, and a comparison is also shown between the two models above. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Christian%20Kokborok%20song" title="Christian Kokborok song">Christian Kokborok song</a>, <a href="https://publications.waset.org/abstracts/search?q=mood%20classification" title=" mood classification"> mood classification</a>, <a href="https://publications.waset.org/abstracts/search?q=music%20information%20retrieval" title=" music information retrieval"> music information retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=regression" title=" regression"> regression</a> </p> <a href="https://publications.waset.org/abstracts/97113/effective-parameter-selection-for-audio-based-music-mood-classification-for-christian-kokborok-song-a-regression-based-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/97113.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">222</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19269</span> Audio-Visual Entrainment and Acupressure Therapy for Insomnia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mariya%20Yeldhos">Mariya Yeldhos</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Hema"> G.
Hema</a>, <a href="https://publications.waset.org/abstracts/search?q=Sowmya%20Narayanan"> Sowmya Narayanan</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20Dhiviyalakshmi"> L. Dhiviyalakshmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Insomnia is one of the most prevalent psychological disorders worldwide. Some of the deficiencies of the current treatments of insomnia are side effects in the case of sleeping pills and high costs in the case of psychotherapeutic treatment. In this paper, we propose a device which provides a combination of audio-visual entrainment and acupressure-based compression therapy for insomnia. The device provides drug-free treatment of insomnia in a user-friendly, portable form that enables relaxation of the brain and muscles, with advantages such as low cost and wide accessibility to a large number of people. The tools adopted for the treatment of insomnia are audio (continuous exposure to binaural beats of a particular frequency in the audible range), visual (flashes of LED light), and acupressure points (GB-20, GV-16, and B-10). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=insomnia" title="insomnia">insomnia</a>, <a href="https://publications.waset.org/abstracts/search?q=acupressure" title=" acupressure"> acupressure</a>, <a href="https://publications.waset.org/abstracts/search?q=entrainment" title=" entrainment"> entrainment</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20entrainment" title=" audio-visual entrainment"> audio-visual entrainment</a> </p> <a href="https://publications.waset.org/abstracts/16739/audio-visual-entrainment-and-acupressure-therapy-for-insomnia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16739.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span
class="badge badge-light">429</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19268</span> A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Analise%20Borg">Analise Borg</a>, <a href="https://publications.waset.org/abstracts/search?q=Paul%20Micallef"> Paul Micallef</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying ways to organize this volume of audio information without the need for human intervention to generate metadata. In the past few years, many applications have emerged on the market that are capable of identifying a piece of music in a short time. Different audio effects and degradation make it much harder to identify the unknown piece. In this paper, an audio fingerprinting system that makes use of a non-parametric algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel Spectrum Coefficients and the MPEG-7 basic descriptors. Bin numbers replaced the extracted feature coefficients during the non-parametric modelling. The results show that non-parametric analysis offers results comparable to the ones mentioned in the literature.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20fingerprinting" title="audio fingerprinting">audio fingerprinting</a>, <a href="https://publications.waset.org/abstracts/search?q=mapping%20algorithm" title=" mapping algorithm"> mapping algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20Mixture%20Models" title=" Gaussian Mixture Models"> Gaussian Mixture Models</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/abstracts/search?q=MPEG-7" title=" MPEG-7"> MPEG-7</a> </p> <a href="https://publications.waset.org/abstracts/22201/a-non-parametric-based-mapping-algorithm-for-use-in-audio-fingerprinting" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22201.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">421</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19267</span> Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20Bryan">T. Bryan </a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Kepuska"> V. Kepuska</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20Kostnaic"> I. Kostnaic</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. 
The basis vectors are shown to have higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted by performing atomic decomposition of the audio data with Gabor or gammatone seed atoms via matching pursuit, a process that identifies segments of the audio data that are locally coherent with the seed atoms. The envelope samples are then formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as for early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors; this display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the basis vectors with the highest denoising capability.
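The matching-pursuit stage of the envelope-sample extraction can be sketched as follows (a minimal illustration with a Gabor dictionary; the atom parameters and dictionary size are assumptions, and the SAE training stage is omitted):

```python
import numpy as np

def gabor_atom(n, freq, sigma):
    """Unit-norm Gabor atom: a Gaussian-windowed cosine."""
    t = np.arange(n) - n / 2
    atom = np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * t)
    return atom / np.linalg.norm(atom)

def matching_pursuit(signal, atoms, n_iter=10):
    """Greedy atomic decomposition: repeatedly pick the atom most
    correlated with the residual and subtract its projection."""
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_iter):
        scores = atoms @ residual
        k = int(np.argmax(np.abs(scores)))
        picks.append((k, float(scores[k])))
        residual = residual - scores[k] * atoms[k]
    return picks, residual

n = 256
dictionary = np.array([gabor_atom(n, f, 20.0) for f in np.linspace(0.01, 0.4, 64)])
signal = 3.0 * dictionary[10] + 0.5 * dictionary[40]   # synthetic two-atom signal
coeffs, residual = matching_pursuit(signal, dictionary, n_iter=5)
```

The selected atoms and their coefficients identify the locally coherent segments from which the envelope samples would then be formed.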
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sparse%20dictionary%20learning" title="sparse dictionary learning">sparse dictionary learning</a>, <a href="https://publications.waset.org/abstracts/search?q=autoencoder" title=" autoencoder"> autoencoder</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20autoencoder" title=" sparse autoencoder"> sparse autoencoder</a>, <a href="https://publications.waset.org/abstracts/search?q=basis%20vectors" title=" basis vectors"> basis vectors</a>, <a href="https://publications.waset.org/abstracts/search?q=atomic%20decomposition" title=" atomic decomposition"> atomic decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=envelope%20sampling" title=" envelope sampling"> envelope sampling</a>, <a href="https://publications.waset.org/abstracts/search?q=envelope%20samples" title=" envelope samples"> envelope samples</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor" title=" Gabor"> Gabor</a>, <a href="https://publications.waset.org/abstracts/search?q=gammatone" title=" gammatone"> gammatone</a>, <a href="https://publications.waset.org/abstracts/search?q=matching%20pursuit" title=" matching pursuit"> matching pursuit</a> </p> <a href="https://publications.waset.org/abstracts/42586/atomic-decomposition-audio-data-compression-and-denoising-using-sparse-dictionary-feature-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42586.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">253</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19266</span> Digital Recording System Identification Based on Audio File</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michel%20Kulhandjian">Michel Kulhandjian</a>, <a href="https://publications.waset.org/abstracts/search?q=Dimitris%20A.%20Pados"> Dimitris A. Pados</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this work is to develop a theoretical framework for reliable digital recording system identification from digital audio files alone, for forensic purposes. A digital recording system consists of a microphone and a digital sound processing card. We view the cascade as a system with an unknown transfer function. We expect same-manufacturer, same-model microphone-sound card combinations to have very similar, near-identical transfer functions, barring any unique manufacturing defect. Input voice (or other) signals are modeled as non-stationary processes. The technical problem under consideration thus becomes blind deconvolution with non-stationary inputs, as it manifests itself in the specific application of digital audio recording equipment classification.
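One crude way to see why a fixed recorder transfer function can be exposed despite varying inputs is to average log-magnitude spectra over many recordings, so the changing source spectra average out (an illustration only, with a hypothetical impulse response, not the authors' blind-deconvolution framework):

```python
import numpy as np

def average_log_spectrum(recordings, n_fft=256):
    """Crude device signature: average the log-magnitude spectra of many
    recordings; the varying source spectra average toward a constant,
    leaving the fixed recorder transfer function to dominate the shape."""
    spectra = [np.log(np.abs(np.fft.rfft(r, n_fft)) + 1e-9) for r in recordings]
    return np.mean(spectra, axis=0)

rng = np.random.default_rng(1)
h = np.array([1.0, 0.6, 0.2])   # hypothetical recorder impulse response
recs = [np.convolve(rng.normal(size=1024), h) for _ in range(50)]
signature = average_log_spectrum(recs)
```

In practice the non-stationarity of speech makes the problem far harder than this white-noise illustration, which is exactly the blind-deconvolution setting the abstract describes.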
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification" title="blind system identification">blind system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20fingerprinting" title=" audio fingerprinting"> audio fingerprinting</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution" title=" blind deconvolution"> blind deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20dereverberation" title=" blind dereverberation"> blind dereverberation</a> </p> <a href="https://publications.waset.org/abstracts/75122/digital-recording-system-identification-based-on-audio-file" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75122.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">304</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19265</span> Satisfaction of Distance Education University Students with the Use of Audio Media as a Medium of Instruction: The Case of Mountains of the Moon University in Uganda</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mark%20Kaahwa">Mark Kaahwa</a>, <a href="https://publications.waset.org/abstracts/search?q=Chang%20Zhu"> Chang Zhu</a>, <a href="https://publications.waset.org/abstracts/search?q=Moses%20Muhumuza"> Moses Muhumuza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates the satisfaction of distance education university students (DEUS) with the use of audio media as a medium of instruction. 
Studying students&rsquo; satisfaction is vital because it shows whether learners are comfortable with a given instructional strategy. Although previous studies have investigated the use of audio media, the satisfaction of students with an instructional strategy that combines radio teaching and podcasts as an independent teaching strategy has not been fully investigated. In this study, all lectures were delivered through the radio, and students had no direct contact with their instructors. No modules or any other material in the form of text were given to the students; instead, they revised the taught content by listening to podcasts saved on their mobile electronic gadgets. Prior to data collection, DEUS received orientation through workshops on how to use audio media in distance education. To achieve the objectives of the study, a survey, naturalistic observations and face-to-face interviews were used to collect data from a sample of 211 undergraduate and graduate students. Findings indicate that there was no statistically significant difference in the levels of satisfaction between male and female students. The results from post hoc analysis show that there is a statistically significant difference in the levels of satisfaction regarding the use of audio media between diploma and graduate students: diploma students are more satisfied than their graduate counterparts. T-test results reveal that there was no statistically significant difference in general satisfaction with audio media between rural and urban-based students, and ANOVA results indicate that there is no statistically significant difference in the levels of satisfaction with the use of audio media across age groups. Furthermore, results from observations and interviews reveal that DEUS found audio media a pleasurable medium of instruction. This is an indication that audio media can be considered an instructional strategy on its own merit.
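The group comparisons reported above can be reproduced in outline with the standard statistics (synthetic stand-in scores, not the study's data; the group sizes and the 5-point scale are assumptions):

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)

def one_way_f(*groups):
    """One-way ANOVA F statistic: between-group over within-group variance."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(42)
# Hypothetical 5-point Likert satisfaction scores (not the study's data).
rural = rng.integers(1, 6, size=100).astype(float)
urban = rng.integers(1, 6, size=111).astype(float)
t_stat = welch_t(rural, urban)                          # rural vs. urban comparison
ages = [rng.integers(1, 6, size=70).astype(float) for _ in range(3)]
f_stat = one_way_f(*ages)                               # across three age groups
```

Each statistic is then compared against the t or F distribution at the chosen significance level; values inside the acceptance region correspond to the "no statistically significant difference" conclusions quoted above.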
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20media" title="audio media">audio media</a>, <a href="https://publications.waset.org/abstracts/search?q=distance%20education" title=" distance education"> distance education</a>, <a href="https://publications.waset.org/abstracts/search?q=distance%20education%20university%20students" title=" distance education university students"> distance education university students</a>, <a href="https://publications.waset.org/abstracts/search?q=medium%20of%20instruction" title=" medium of instruction"> medium of instruction</a>, <a href="https://publications.waset.org/abstracts/search?q=satisfaction" title=" satisfaction"> satisfaction</a> </p> <a href="https://publications.waset.org/abstracts/100030/satisfaction-of-distance-education-university-students-with-the-use-of-audio-media-as-a-medium-of-instruction-the-case-of-mountains-of-the-moon-university-in-uganda" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/100030.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19264</span> Robust and Transparent Spread Spectrum Audio Watermarking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Akbar%20Attari">Ali Akbar Attari</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Asghar%20Beheshti%20Shirazi"> Ali Asghar Beheshti Shirazi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a blind and robust audio watermarking scheme based on spread spectrum in Discrete Wavelet Transform (DWT) domain. 
Watermarks are embedded in the low-frequency coefficients, where they are less audible. The key idea is to divide the audio signal into small frames and to modify the magnitudes of the 6<sup>th</sup>-level DWT approximation coefficients based on the Direct Sequence Spread Spectrum (DSSS) technique. A psychoacoustic model is used to enhance imperceptibility, and a Savitzky-Golay filter to increase extraction accuracy. The experimental results illustrate high robustness against the most common attacks, i.e. Gaussian noise addition, low-pass filtering, resampling, requantization, and MP3 compression, without significant perceptual distortion (ODG higher than -1). The proposed scheme has a data payload of about 83 bps. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20watermarking" title="audio watermarking">audio watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=spread%20spectrum" title=" spread spectrum"> spread spectrum</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title=" discrete wavelet transform"> discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=psychoacoustic" title=" psychoacoustic"> psychoacoustic</a>, <a href="https://publications.waset.org/abstracts/search?q=Savitsky-Golay%20filter" title=" Savitsky-Golay filter"> Savitsky-Golay filter</a> </p> <a href="https://publications.waset.org/abstracts/86040/robust-and-transparent-spread-spectrum-audio-watermarking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86040.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">200</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge
badge-info">19263</span> Multi-Level Pulse Width Modulation to Boost the Power Efficiency of Switching Amplifiers for Analog Signals with Very High Crest Factor</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jan%20Doutreloigne">Jan Doutreloigne</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main goal of this paper is to develop a switching amplifier with optimized power efficiency for analog signals with a very high crest factor, such as audio or DSL signals. Theoretical calculations show that a switching amplifier architecture based on multi-level pulse width modulation outperforms all other types of linear or switching amplifiers in that respect. Simulations on a 2 W multi-level switching audio amplifier, designed in a 50 V 0.35 &micro;m IC technology, confirm its superior performance in terms of power efficiency. A real silicon implementation of this audio amplifier design is currently underway to provide experimental validation.
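The multi-level PWM idea can be illustrated with level-shifted carriers: the reference is compared against stacked triangular carriers, and the output steps through discrete levels whose short-term average tracks the reference (a behavioural sketch only; the level count and carrier period are assumptions, not the paper's amplifier design):

```python
import numpy as np

def multilevel_pwm(reference, n_levels=5, carrier_period=32):
    """Phase-disposition multilevel PWM: compare the reference against
    (n_levels - 1) stacked triangular carriers; within each band the duty
    cycle makes the local average of the output equal the reference."""
    n = len(reference)
    samples = np.arange(n)
    tri = 2 * np.abs((samples / carrier_period) % 1 - 0.5)   # 0..1 triangle wave
    bands = np.linspace(-1, 1, n_levels)                     # discrete output levels
    out = np.full(n, bands[0])
    for i in range(n_levels - 1):
        lo, hi = bands[i], bands[i + 1]
        carrier = lo + (hi - lo) * tri                       # carrier inside band i
        out = np.where(reference > carrier, np.maximum(out, hi), out)
    return out

t = np.linspace(0, 1, 4096, endpoint=False)
ref = 0.9 * np.sin(2 * np.pi * 5 * t)   # high-crest-factor stand-in reference
pwm = multilevel_pwm(ref)
```

Because the output only ever switches between adjacent levels, the switching voltage swing (and hence the switching loss) is a fraction of what a two-level PWM amplifier would incur, which is the efficiency argument the abstract makes.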
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20amplifier" title="audio amplifier">audio amplifier</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-level%20switching%20amplifier" title=" multi-level switching amplifier"> multi-level switching amplifier</a>, <a href="https://publications.waset.org/abstracts/search?q=power%20efficiency" title=" power efficiency"> power efficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=pulse%20width%20modulation" title=" pulse width modulation"> pulse width modulation</a>, <a href="https://publications.waset.org/abstracts/search?q=PWM" title=" PWM"> PWM</a>, <a href="https://publications.waset.org/abstracts/search?q=self-oscillating%20amplifier" title=" self-oscillating amplifier"> self-oscillating amplifier</a> </p> <a href="https://publications.waset.org/abstracts/82607/multi-level-pulse-width-modulation-to-boost-the-power-efficiency-of-switching-amplifiers-for-analog-signals-with-very-high-crest-factor" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82607.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19262</span> Implementation and Performance Analysis of Data Encryption Standard and RSA Algorithm with Image Steganography and Audio Steganography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20C.%20Sharma">S. C. 
Sharma</a>, <a href="https://publications.waset.org/abstracts/search?q=Ankit%20Gambhir"> Ankit Gambhir</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajeev%20Arya"> Rajeev Arya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In today’s era, data security is an important and demanding concern because it is essential for people using online banking, e-shopping, reservations, etc. The two major techniques used for secure communication are cryptography and steganography. Cryptographic algorithms scramble the data so that an intruder will not be able to retrieve it; steganography, however, hides the data in some cover file so that the presence of communication itself is hidden. This paper presents the implementation of the Rivest-Shamir-Adleman (RSA) algorithm with image and audio steganography and of the Data Encryption Standard (DES) algorithm with image and audio steganography. Both algorithms were coded in MATLAB, and it is observed that the combined techniques perform better than the individual techniques. The risk of unauthorized access is alleviated to a certain extent by using these techniques. These techniques could be used in banks, intelligence agencies such as RAW, and other settings where highly confidential data is transferred. Finally, a comparison of the two techniques is also given in tabular form.
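The embedding half of such a scheme can be sketched with LSB substitution in 16-bit audio samples (an illustrative stand-in: the abstract does not specify the embedding method, and the payload here is plain bytes rather than DES- or RSA-encrypted ciphertext):

```python
import numpy as np

def embed_lsb(samples, payload_bits):
    """Hide payload bits in the least-significant bits of 16-bit audio
    samples; in the combined scheme the payload would be the ciphertext
    produced by DES or RSA rather than the plaintext."""
    out = samples.copy()
    out[: len(payload_bits)] = (out[: len(payload_bits)] & ~1) | payload_bits
    return out

def extract_lsb(samples, n_bits):
    """Recover the hidden bits from the sample LSBs."""
    return samples[:n_bits] & 1

rng = np.random.default_rng(7)
audio = rng.integers(-2**15, 2**15, size=1000, dtype=np.int16)  # stand-in cover audio
secret = np.frombuffer(b"top secret", dtype=np.uint8)
bits = np.unpackbits(secret).astype(np.int16)
stego = embed_lsb(audio, bits)
recovered = np.packbits(extract_lsb(stego, len(bits)).astype(np.uint8)).tobytes()
```

Each cover sample changes by at most one quantization step, which is why the embedding is inaudible; encrypting the payload first means that even if the LSB channel is discovered, the data remains protected.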
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20steganography" title="audio steganography">audio steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20security" title=" data security"> data security</a>, <a href="https://publications.waset.org/abstracts/search?q=DES" title=" DES"> DES</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20steganography" title=" image steganography"> image steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=intruder" title=" intruder"> intruder</a>, <a href="https://publications.waset.org/abstracts/search?q=RSA" title=" RSA"> RSA</a>, <a href="https://publications.waset.org/abstracts/search?q=steganography" title=" steganography"> steganography</a> </p> <a href="https://publications.waset.org/abstracts/71013/implementation-and-performance-analysis-of-data-encryption-standard-and-rsa-algorithm-with-image-steganography-and-audio-steganography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71013.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">290</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19261</span> Agricultural Education by Media in Yogyakarta, Indonesia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Retno%20Dwi%20Wahyuningrum">Retno Dwi Wahyuningrum</a>, <a href="https://publications.waset.org/abstracts/search?q=Sunarru%20Samsi%20Hariadi"> Sunarru Samsi Hariadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Education in agriculture is significant because it can support farmers in improving their businesses.
This can be done through certain media, such as printed, audio, and audio-visual media. To find out the effects of these media on the knowledge, attitude, and motivation of farmers to adopt innovation, a study was conducted on 342 farmers, randomly selected from 12 farmer groups in the districts of Sleman and Bantul, Special Region of Yogyakarta Province. The study ran from October 2014 to November 2015; respondents were interviewed using a questionnaire which included 20 questions on knowledge, 20 questions on attitude, and 20 questions on adopting motivation. The data for attitude and adopting motivation were scored on a Likert scale and then tested for validity and reliability. Differences in the levels of knowledge, attitude, and motivation were tested based on percentage intervals of their average scores and categorized into five interpretation levels. The results show that printed, audio, and audio-visual media have different impacts on the farmers. First, all media make farmers very aware of agricultural innovation, but the highest percentage is achieved by theatrical play. Second, the most effective medium for improving attitude is interactive dialogue on the radio. Finally, printed media, especially comics, are the most effective way to improve the adopting motivation of farmers.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=agricultural%20education" title="agricultural education">agricultural education</a>, <a href="https://publications.waset.org/abstracts/search?q=printed%20media" title=" printed media"> printed media</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20media" title=" audio media"> audio media</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20media" title=" audio-visual media"> audio-visual media</a>, <a href="https://publications.waset.org/abstracts/search?q=farmer%20knowledge" title=" farmer knowledge"> farmer knowledge</a>, <a href="https://publications.waset.org/abstracts/search?q=farmer%20attitude" title=" farmer attitude"> farmer attitude</a>, <a href="https://publications.waset.org/abstracts/search?q=farmer%20adopting%20motivation" title=" farmer adopting motivation"> farmer adopting motivation</a> </p> <a href="https://publications.waset.org/abstracts/77915/agricultural-education-by-media-in-yogyakarta-indonesia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19260</span> Using Audio-Visual Aids and Computer-Assisted Language Instruction to Overcome Learning Difficulties of Reading in Students of Special Needs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadeq%20Al%20Yaari">Sadeq Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayman%20Al%20Yaari"> Ayman Al Yaari</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Adham%20Al%20Yaari"> Adham Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Montaha%20Al%20Yaari"> Montaha Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Aayah%20Al%20Yaari"> Aayah Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Sajedah%20Al%20Yaari"> Sajedah Al Yaari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background & aims: Reading is a receptive skill in which learners’ abilities can vary widely from the linguistic standard. Several lines of evidence support the hypothesis that the more you read, the better you write, with different outcomes for speech-language therapists (SLTs) who use audio-visual aids and computer-assisted language instruction (CALI) and those who do not. Methods: We used audio-visual aids and CALI to teach reading to a group of 40 students with special needs of both sexes (aged between 8 and 18 years) at al-Malādh school for teaching students of special needs in Dhamar (Yemen), while another group of the same size was taught using ordinary teaching methods. Pre- and post-tests were administered at the beginning and the end of the semester (before and after teaching the reading course). The purpose was to examine the differences between the levels of the students with special needs and to see to what extent audio-visual aids and CALI are useful for them. The two groups were taught by the same instructor under the same circumstances in the same school. Both quantitative and qualitative procedures were used to analyze the data. Results: The overall findings revealed that audio-visual aids and CALI are very useful for teaching reading to students with special needs, as can be seen in the scores of the treatment group’s subjects (7.0% in the post-test vs. 2.5% in the pre-test).
In comparison to the scores of the second group’s subjects (2.2% in both pre- and post-tests, where audio-visual aids and CALI were not used), the first group’s subjects have overcome the reading tasks, as can be observed in their performance in the post-test. Compared with males, females’ performance was better (1466 points (7.3%) vs. 1371 points (6.8%)). Qualitative and statistical analyses showed that this improvement is due to the use of audio-visual aids and CALI. These outcomes confirm the significance of audio-visual aids and CALI as effective means for teaching receptive skills in general and the reading skill in particular. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=reading" title="reading">reading</a>, <a href="https://publications.waset.org/abstracts/search?q=receptive%20skills" title=" receptive skills"> receptive skills</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20aids" title=" audio-visual aids"> audio-visual aids</a>, <a href="https://publications.waset.org/abstracts/search?q=CALI" title=" CALI"> CALI</a>, <a href="https://publications.waset.org/abstracts/search?q=students" title=" students"> students</a>, <a href="https://publications.waset.org/abstracts/search?q=special%20needs" title=" special needs"> special needs</a>, <a href="https://publications.waset.org/abstracts/search?q=SLTs" title=" SLTs"> SLTs</a> </p> <a href="https://publications.waset.org/abstracts/186624/using-audio-visual-aids-and-computer-assisted-language-instruction-to-overcome-learning-difficulties-of-reading-in-students-of-special-needs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186624.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">49</span> </span> </div> </div> <ul
class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=642">642</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=643">643</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio%20lingual%20method&amp;page=2" 
rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
