Search results for: influence of audio
class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="influence of audio"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 8025</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: influence of audio</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8025</span> The Influence of Audio on Perceived Quality of Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Silvio%20Ricardo%20Rodrigues%20Sanches">Silvio Ricardo Rodrigues Sanches</a>, <a href="https://publications.waset.org/abstracts/search?q=Bianca%20Cogo%20Barbosa"> Bianca Cogo Barbosa</a>, <a href="https://publications.waset.org/abstracts/search?q=Beatriz%20Regina%20Brum"> Beatriz Regina Brum</a>, <a href="https://publications.waset.org/abstracts/search?q=Cl%C3%A9ber%20Gimenez%20Corr%C3%AAa"> Cléber Gimenez Corrêa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To evaluate the quality of a segmentation algorithm, the authors use subjective or objective metrics. Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm. Objective metrics require subjective experiments only during their development. Subjective experiments typically display to users some videos (generated from frames with segmentation errors) that simulate the environment of an application domain. This user feedback is crucial information for metric definition. In the subjective experiments applied to develop some state-of-the-art metrics used to test segmentation algorithms, the videos displayed during the experiments did not contain audio. Audio is an essential component in applications such as videoconference and augmented reality. If the audio influences the user’s perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated using data from these experiments. This work aims to identify if the audio influences the user’s perception of segmentation quality in background substitution applications with audio. 
The proposed approach used a subjective method based on formal video quality assessment methods. The results showed that audio influences the quality of segmentation perceived by a user.
Keywords: background substitution, influence of audio, segmentation evaluation, segmentation quality
PDF: https://publications.waset.org/abstracts/148456.pdf | Downloads: 117

8024. The Influence of Audio-Visual Resources in Teaching Business Subjects in Selected Secondary Schools in Ifako-Ijaiye Local Government Area of Lagos State, Nigeria
Authors: Oluwole Victor Falobi, Lawrence Olusola Ige
Abstract: The driving force of this study is to examine the influence of audio-visual resources in teaching business subjects in selected secondary schools in Ifako-Ijaiye Local Government Area of Lagos State, Nigeria. A descriptive survey research design was employed, using a quantitative approach with a sample of 120 students randomly selected from four public schools. Three research questions and one hypothesis guided the study. Data collected were analysed using frequencies, means and standard deviations for the research questions, while the Pearson Product Moment Correlation (PPMC) was used for the inferential statistics. Findings revealed that the influence of audio-visual resources on the teaching of business subjects in the selected schools is low. They further revealed that teachers' knowledge of how to use audio-visual resources is high in Ifako-Ijaiye Local Government Area. It was recommended that government create a timely monitoring system to check secondary school laboratories and classrooms, replace outdated facilities, and purchase needed facilities so that effective teaching and learning can take place.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20resources" title="audio-visual resources">audio-visual resources</a>, <a href="https://publications.waset.org/abstracts/search?q=business%20subjects" title=" business subjects"> business subjects</a>, <a href="https://publications.waset.org/abstracts/search?q=school" title=" school"> school</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching" title=" teaching"> teaching</a> </p> <a href="https://publications.waset.org/abstracts/154383/the-influence-of-audio-visual-resources-in-teaching-business-subjects-in-selected-secondary-schools-in-ifako-ijaiye-local-government-area-of-lagos-state-nigeria" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/154383.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8023</span> Perception of Value Affecting Engagement Through Online Audio Communication</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Apipol%20Penkitti">Apipol Penkitti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The new normal or a new way of life stemmed from the COVID-19 outbreak, gave rise to a new form of social media: audio-based social platforms (ABSPs), known as Clubhouse, Twitter space, and Facebook live audio room. These platforms, on which audio-based communication is featured, became popular in a short span of time. The objective of the research study is to understand ABSPs users’ behaviors in Thailand. The study, in which functional attitude theory, uses and gratifications theory, and social influence theory are referred to, is conducted through consumer perceived utilitarian, hedonic, and social value that affect engagement. This research study is mixed method paradigm, utilizing Model of Triangulation as its framework. The data acquisition is proceeded through questionnaires from a sample of 384 male, female and LGBTQA+ individuals aged 25 - 34 who, from various occupations, have used audio-based social platform applications. This research study employs the structural equation modeling to analyze the relationships between variables, and it uses the semi - structured interviewing to comprehend the rationality of the variables in the study. The study found that hedonic value directly affects engagement. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20based%20social%20platform" title="audio based social platform">audio based social platform</a>, <a href="https://publications.waset.org/abstracts/search?q=engagement" title=" engagement"> engagement</a>, <a href="https://publications.waset.org/abstracts/search?q=hedonic" title=" hedonic"> hedonic</a>, <a href="https://publications.waset.org/abstracts/search?q=perceived%20value" title=" perceived value"> perceived value</a>, <a href="https://publications.waset.org/abstracts/search?q=social" title=" social"> social</a>, <a href="https://publications.waset.org/abstracts/search?q=utilitarian" title=" utilitarian"> utilitarian</a> </p> <a href="https://publications.waset.org/abstracts/147744/perception-of-value-affecting-engagement-through-online-audio-communication" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147744.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">126</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8022</span> Spatial Audio Player Using Musical Genre Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jun-Yong%20Lee">Jun-Yong Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Gook%20Kim"> Hyoung-Gook Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a smart music player that combines the musical genre classification and the spatial audio processing. The musical genre is classified based on content analysis of the musical segment detected from the audio stream. In parallel with the classification, the spatial audio quality is achieved by adding an artificial reverberation in a virtual acoustic space to the input mono sound. Thereafter, the spatial sound is boosted with the given frequency gains based on the musical genre when played back. Experiments measured the accuracy of detecting the musical segment from the audio stream and its musical genre classification. A listening test was performed based on the virtual acoustic space based spatial audio processing. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20equalization" title="automatic equalization">automatic equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=genre%20classification" title=" genre classification"> genre classification</a>, <a href="https://publications.waset.org/abstracts/search?q=music%20segment%20detection" title=" music segment detection"> music segment detection</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20audio%20processing" title=" spatial audio processing"> spatial audio processing</a> </p> <a href="https://publications.waset.org/abstracts/7561/spatial-audio-player-using-musical-genre-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7561.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">429</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8021</span> Mathematical Model That Using Scrambling and Message Integrity Methods in Audio Steganography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Salem%20Atoum">Mohammed Salem Atoum</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The success of audio steganography is to ensure imperceptibility of the embedded message in stego file and withstand any form of intentional or un-intentional degradation of message (robustness). Audio steganographic that utilized LSB of audio stream to embed message gain a lot of popularity over the years in meeting the perceptual transparency, robustness and capacity. This research proposes an XLSB technique in order to circumvent the weakness observed in LSB technique. Scrambling technique is introduce in two steps; partitioning the message into blocks followed by permutation each blocks in order to confuse the contents of the message. The message is embedded in the MP3 audio sample. After extracting the message, the permutation codebook is used to re-order it into its original form. Md5sum and SHA-256 are used to verify whether the message is altered or not during transmission. Experimental result shows that the XLSB performs better than LSB. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=XLSB" title="XLSB">XLSB</a>, <a href="https://publications.waset.org/abstracts/search?q=scrambling" title=" scrambling"> scrambling</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20steganography" title=" audio steganography"> audio steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=security" title=" security"> security</a> </p> <a href="https://publications.waset.org/abstracts/42449/mathematical-model-that-using-scrambling-and-message-integrity-methods-in-audio-steganography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">363</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8020</span> Freedom of Expression and Its Restriction in Audiovisual Media</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sevil%20Yildiz">Sevil Yildiz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Audio visual communication is a type of collective expression. Collective expression activity informs the masses, gives direction to opinions and establishes public opinion. Due to these characteristics, audio visual communication must be subjected to special restrictions. This has been stipulated in both the Constitution and the European Human Rights Agreement. This paper aims to review freedom of expression and its restriction in audio visual media. For this purpose, the authorisation of the Radio and Television Supreme Council to impose sanctions as an independent administrative authority empowered to regulate the field of audio visual communication has been reviewed with regard to freedom of expression and its limits. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20visual%20media" title="audio visual media">audio visual media</a>, <a href="https://publications.waset.org/abstracts/search?q=freedom%20of%20expression" title=" freedom of expression"> freedom of expression</a>, <a href="https://publications.waset.org/abstracts/search?q=its%20limits" title=" its limits"> its limits</a>, <a href="https://publications.waset.org/abstracts/search?q=radio%20and%20television%20supreme%20council" title=" radio and television supreme council"> radio and television supreme council</a> </p> <a href="https://publications.waset.org/abstracts/39325/freedom-of-expression-and-its-restriction-in-audiovisual-media" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39325.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">326</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8019</span> Audio Information Retrieval in Mobile Environment with Fast Audio Classifier</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bruno%20T.%20Gomes">Bruno T. 
Gomes</a>, <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20A.%20Menezes"> José A. Menezes</a>, <a href="https://publications.waset.org/abstracts/search?q=Giordano%20Cabral"> Giordano Cabral</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the popularity of smartphones, mobile apps emerge to meet the diverse needs, however the resources at the disposal are limited, either by the hardware, due to the low computing power, or the software, that does not have the same robustness of desktop environment. For example, in automatic audio classification (AC) tasks, musical information retrieval (MIR) subarea, is required a fast processing and a good success rate. However the mobile platform has limited computing power and the best AC tools are only available for desktop. To solve these problems the fast classifier suits, to mobile environments, the most widespread MIR technologies, seeking a balance in terms of speed and robustness. At the end we found that it is possible to enjoy the best of MIR for mobile environments. This paper presents the results obtained and the difficulties encountered. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20classification" title="audio classification">audio classification</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20extraction" title=" audio extraction"> audio extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=environment%20mobile" title=" environment mobile"> environment mobile</a>, <a href="https://publications.waset.org/abstracts/search?q=musical%20information%20retrieval" title=" musical information retrieval"> musical information retrieval</a> </p> <a href="https://publications.waset.org/abstracts/36642/audio-information-retrieval-in-mobile-environment-with-fast-audio-classifier" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36642.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">545</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8018</span> Genetic Algorithms for Feature Generation in the Context of Audio Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20A.%20Menezes">José A. Menezes</a>, <a href="https://publications.waset.org/abstracts/search?q=Giordano%20Cabral"> Giordano Cabral</a>, <a href="https://publications.waset.org/abstracts/search?q=Bruno%20T.%20Gomes"> Bruno T. Gomes</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Choosing good features is an essential part of machine learning. Recent techniques aim to automate this process. For instance, feature learning intends to learn the transformation of raw data into a useful representation to machine learning tasks. In automatic audio classification tasks, this is interesting since the audio, usually complex information, needs to be transformed into a computationally convenient input to process. Another technique tries to generate features by searching a feature space. Genetic algorithms, for instance, have being used to generate audio features by combining or modifying them. 
We find this approach particularly interesting and, despite the undeniable advances of feature learning, we take a further step in the use of genetic algorithms to find audio features, combining them with more conventional methods such as PCA and inserting search control mechanisms, such as constraints over a confusion matrix. This work presents the results obtained on particular audio classification problems.
Keywords: feature generation, feature learning, genetic algorithm, music information retrieval
PDF: https://publications.waset.org/abstracts/36638.pdf | Downloads: 435

8017. Mood Recognition Using Indian Music
Authors: Vishwa Joshi
Abstract: The study of mood recognition in music has gained momentum in recent years, with machine learning and data mining techniques and many audio features contributing considerably to analysing and identifying the relation between mood and music. In this paper we carry this idea forward and make an effort to build a system for automatic recognition of the mood underlying audio song clips by mining their audio features, evaluating several data classification algorithms in order to learn, train, and test a model describing the moods of these songs, and developing an open-source framework. Before classification, preprocessing and feature extraction phases are necessary for removing noise and gathering features, respectively.
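The evaluation loop the abstract describes (features per clip, several classifiers compared) reduces to a few lines with scikit-learn. The MFCC-mean features and synthetic data below are stand-ins; the paper uses its own feature set and real song clips.

# Minimal sketch of the classify-and-compare step, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))        # e.g. 13 MFCC means per song clip
y = rng.integers(0, 4, size=200)      # 4 mood classes (happy, sad, calm, angry)

for clf in (SVC(), KNeighborsClassifier(), RandomForestClassifier()):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(score, 3))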
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=music" title="music">music</a>, <a href="https://publications.waset.org/abstracts/search?q=mood" title=" mood"> mood</a>, <a href="https://publications.waset.org/abstracts/search?q=features" title=" features"> features</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/24275/mood-recognition-using-indian-music" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24275.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">498</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8016</span> Musical Tesla Coil Controlled by an Audio Signal Processed in Matlab</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Cuenca">Sandra Cuenca</a>, <a href="https://publications.waset.org/abstracts/search?q=Danilo%20Santana"> Danilo Santana</a>, <a href="https://publications.waset.org/abstracts/search?q=Anderson%20Reyes"> Anderson Reyes</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The following project is based on the manipulation of audio signals through the Matlab software, which has an audio signal that is modified, and its resultant obtained through the auxiliary port of the computer is passed through a signal amplifier whose amplified signal is connected to a tesla coil which has a behavior like a vumeter, the flashes at the output of the tesla coil increase and decrease its intensity depending on the audio signal in the computer and also the voltage source from which it is sent. The amplified signal then passes to the tesla coil being shown in the plasma sphere with the respective flashes; this activation is given through the specified parameters that we want to give in the MATLAB algorithm that contains the digital filters for the manipulation of our audio signal sent to the tesla coil to be displayed in a plasma sphere with flashes of the combination of colors commonly pink and purple that varies according to the tone of the song. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=auxiliary%20port" title="auxiliary port">auxiliary port</a>, <a href="https://publications.waset.org/abstracts/search?q=tesla%20coil" title=" tesla coil"> tesla coil</a>, <a href="https://publications.waset.org/abstracts/search?q=vumeter" title=" vumeter"> vumeter</a>, <a href="https://publications.waset.org/abstracts/search?q=plasma%20sphere" title=" plasma sphere"> plasma sphere</a> </p> <a href="https://publications.waset.org/abstracts/170874/musical-tesla-coil-controlled-by-an-audio-signal-processed-in-matlab" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170874.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8015</span> Audio-Visual Aids and the Secondary School Teaching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shrikrishna%20Mishra">Shrikrishna Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Badri%20Yadav"> Badri Yadav</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this complex society of today where experiences are innumerable and varied, it is not at all possible to present every situation in its original colors hence the opportunities for learning by actual experiences always are not at all possible. It is only through the use of proper audio visual aids that the life situation can be trough in the class room by an enlightened teacher in their simplest form and representing the original to the highest point of similarity which is totally absent in the verbal or lecture method. In the presence of audio aids, the attention is attracted interest roused and suitable atmosphere for proper understanding is automatically created, but in the existing traditional method greater efforts are to be made in order to achieve the aforesaid essential requisite. Inspire of the best and sincere efforts on the side of the teacher the net effect as regards understanding or learning in general is quite negligible. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Audio-Visual%20Aids" title="Audio-Visual Aids">Audio-Visual Aids</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20secondary%20school%20teaching" title=" the secondary school teaching"> the secondary school teaching</a>, <a href="https://publications.waset.org/abstracts/search?q=complex%20society" title=" complex society"> complex society</a>, <a href="https://publications.waset.org/abstracts/search?q=audio" title=" audio"> audio</a> </p> <a href="https://publications.waset.org/abstracts/16270/audio-visual-aids-and-the-secondary-school-teaching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16270.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">482</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8014</span> Effective Parameter Selection for Audio-Based Music Mood Classification for Christian Kokborok Song: A Regression-Based Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sanchali%20Das">Sanchali Das</a>, <a href="https://publications.waset.org/abstracts/search?q=Swapan%20Debbarma"> Swapan Debbarma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Music mood classification is developing in both the areas of music information retrieval (MIR) and natural language processing (NLP). Some of the Indian languages like Hindi English etc. have considerable exposure in MIR. But research in mood classification in regional language is very less. In this paper, powerful audio based feature for Kokborok Christian song is identified and mood classification task has been performed. Kokborok is an Indo-Burman language especially spoken in the northeastern part of India and also some other countries like Bangladesh, Myanmar etc. For performing audio-based classification task, useful audio features are taken out by jMIR software. There are some standard audio parameters are there for the audio-based task but as known to all that every language has its unique characteristics. So here, the most significant features which are the best fit for the database of Kokborok song is analysed. The regression-based model is used to find out the independent parameters that act as a predictor and predicts the dependencies of parameters and shows how it will impact on overall classification result. For classification WEKA 3.5 is used, and selected parameters create a classification model. And another model is developed by using all the standard audio features that are used by most of the researcher. In this experiment, the essential parameters that are responsible for effective audio based mood classification and parameters that do not significantly change for each of the Christian Kokborok songs are analysed, and a comparison is also shown between the two above model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Christian%20Kokborok%20song" title="Christian Kokborok song">Christian Kokborok song</a>, <a href="https://publications.waset.org/abstracts/search?q=mood%20classification" title=" mood classification"> mood classification</a>, <a href="https://publications.waset.org/abstracts/search?q=music%20information%20retrieval" title=" music information retrieval"> music information retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=regression" title=" regression"> regression</a> </p> <a href="https://publications.waset.org/abstracts/97113/effective-parameter-selection-for-audio-based-music-mood-classification-for-christian-kokborok-song-a-regression-based-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/97113.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">222</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8013</span> A Study on the Improvement of Mobile Device Call Buzz Noise Caused by Audio Frequency Ground Bounce</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jangje%20Park">Jangje Park</a>, <a href="https://publications.waset.org/abstracts/search?q=So%20Young%20Kim"> So Young Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The market demand for audio quality in mobile devices continues to increase, and audible buzz noise generated in time division communication is a chronic problem that goes against the market demand. In the case of time division type communication, the RF Power Amplifier (RF PA) is driven at the audio frequency cycle, and it makes various influences on the audio signal. In this paper, we measured the ground bounce noise generated by the peak current flowing through the ground network in the RF PA with the audio frequency; it was confirmed that the noise is the cause of the audible buzz noise during a call. In addition, a grounding method of the microphone device that can improve the buzzing noise was proposed. Considering that the level of the audio signal generated by the microphone device is -38dBV based on 94dB Sound Pressure Level (SPL), even ground bounce noise of several hundred uV will fall within the range of audible noise if it is induced by the audio amplifier. Through the grounding method of the microphone device proposed in this paper, it was confirmed that the audible buzz noise power density at the RF PA driving frequency was improved by more than 5dB under the conditions of the Printed Circuit Board (PCB) used in the experiment. A fundamental improvement method was presented regarding the buzzing noise during a mobile phone call. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20frequency" title="audio frequency">audio frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=buzz%20noise" title=" buzz noise"> buzz noise</a>, <a href="https://publications.waset.org/abstracts/search?q=ground%20bounce" title=" ground bounce"> ground bounce</a>, <a href="https://publications.waset.org/abstracts/search?q=microphone%20grounding" title=" microphone grounding"> microphone grounding</a> </p> <a href="https://publications.waset.org/abstracts/150713/a-study-on-the-improvement-of-mobile-device-call-buzz-noise-caused-by-audio-frequency-ground-bounce" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150713.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8012</span> Audio-Visual Entrainment and Acupressure Therapy for Insomnia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mariya%20Yeldhos">Mariya Yeldhos</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Hema"> G. Hema</a>, <a href="https://publications.waset.org/abstracts/search?q=Sowmya%20Narayanan"> Sowmya Narayanan</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20Dhiviyalakshmi"> L. Dhiviyalakshmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Insomnia is one of the most prevalent psychological disorders worldwide. Some of the deficiencies of the current treatments of insomnia are: side effects in the case of sleeping pills and high costs in the case of psychotherapeutic treatment. In this paper, we propose a device which provides a combination of audio visual entrainment and acupressure based compression therapy for insomnia. This device provides drug-free treatment of insomnia through a user friendly and portable device that enables relaxation of brain and muscles, with certain advantages such as low cost, and wide accessibility to a large number of people. 
Tools adapted towards the treatment of insomnia:
- Audio: continuous exposure to binaural beats of a particular frequency in the audible range
- Visual: flashes of LED light
- Acupressure points: GB-20, GV-16, B-10
Keywords: insomnia, acupressure, entrainment, audio-visual entrainment
PDF: https://publications.waset.org/abstracts/16739.pdf | Downloads: 429

8011. A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting
Authors: Analise Borg, Paul Micallef
Abstract: Over the past few years, online multimedia collections have grown at a fast pace, and several companies have shown interest in ways to organise this amount of audio information without human intervention to generate metadata. Many applications have emerged on the market which are capable of identifying a piece of music in a short time, yet various audio effects and degradations make it much harder to identify an unknown piece. In this paper, an audio fingerprinting system which makes use of a non-parametric algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel spectrum coefficients and the MPEG-7 basic descriptors. Bin numbers replace the extracted feature coefficients during the non-parametric modelling. The results show that non-parametric analysis offers results comparable to those reported in the literature.
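The two modelling routes the abstract compares can be sketched side by side: a parametric GMM over spectral features versus a non-parametric model in which coefficients are replaced by histogram bin numbers. The features below are random stand-ins for Mel spectrum / MPEG-7 descriptors, and the bin count is an assumption.

# Sketch of parametric (GMM) vs. non-parametric (binned) fingerprinting.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
features = rng.normal(size=(500, 20))            # frames x coefficients

# Parametric: fit a GMM per reference track, score unknown audio against it.
gmm = GaussianMixture(n_components=8, random_state=0).fit(features)
print("avg log-likelihood:", gmm.score(features))

# Non-parametric: quantise each coefficient into bin numbers and keep the
# per-coefficient bin-count profile as the fingerprint.
edges = np.linspace(features.min(), features.max(), 17)   # 16 bins
bins = np.digitize(features, edges)
fingerprint = np.array([np.bincount(b, minlength=18) for b in bins.T])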
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20fingerprinting" title="audio fingerprinting">audio fingerprinting</a>, <a href="https://publications.waset.org/abstracts/search?q=mapping%20algorithm" title=" mapping algorithm"> mapping algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20Mixture%20Models" title=" Gaussian Mixture Models"> Gaussian Mixture Models</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/abstracts/search?q=MPEG-7" title=" MPEG-7"> MPEG-7</a> </p> <a href="https://publications.waset.org/abstracts/22201/a-non-parametric-based-mapping-algorithm-for-use-in-audio-fingerprinting" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22201.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">421</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8010</span> Digital Recording System Identification Based on Audio File</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michel%20Kulhandjian">Michel Kulhandjian</a>, <a href="https://publications.waset.org/abstracts/search?q=Dimitris%20A.%20Pados"> Dimitris A. Pados</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this work is to develop a theoretical framework for reliable digital recording system identification from digital audio files alone, for forensic purposes. A digital recording system consists of a microphone and a digital sound processing card. We view the cascade as a system of unknown transfer function. We expect same manufacturer and model microphone-sound card combinations to have very similar/near identical transfer functions, bar any unique manufacturing defect. Input voice (or other) signals are modeled as non-stationary processes. The technical problem under consideration becomes blind deconvolution with non-stationary inputs as it manifests itself in the specific application of digital audio recording equipment classification. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification" title="blind system identification">blind system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20fingerprinting" title=" audio fingerprinting"> audio fingerprinting</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution" title=" blind deconvolution"> blind deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20dereverberation" title=" blind dereverberation"> blind dereverberation</a> </p> <a href="https://publications.waset.org/abstracts/75122/digital-recording-system-identification-based-on-audio-file" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75122.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">304</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8009</span> Audio-Visual Recognition Based on Effective Model and Distillation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Heng%20Yang">Heng Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Tao%20Luo"> Tao Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Yakun%20Zhang"> Yakun Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kai%20Wang"> Kai Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei%20Qin"> Wei Qin</a>, <a href="https://publications.waset.org/abstracts/search?q=Liang%20Xie"> Liang Xie</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20Yan"> Ye Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=Erwei%20Yin"> Erwei Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recent years have seen that audio-visual recognition has shown great potential in a strong noise environment. The existing method of audio-visual recognition has explored methods with ResNet and feature fusion. However, on the one hand, ResNet always occupies a large amount of memory resources, restricting the application in engineering. On the other hand, the feature merging also brings some interferences in a high noise environment. In order to solve the problems, we proposed an effective framework with bidirectional distillation. At first, in consideration of the good performance in extracting of features, we chose the light model, Efficientnet as our extractor of spatial features. Secondly, self-distillation was applied to learn more information from raw data. Finally, we proposed a bidirectional distillation in decision-level fusion. In more detail, our experimental results are based on a multi-model dataset from 24 volunteers. Eventually, the lipreading accuracy of our framework was increased by 2.3% compared with existing systems, and our framework made progress in audio-visual fusion in a high noise environment compared with the system of audio recognition without visual. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lipreading" title="lipreading">lipreading</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual" title=" audio-visual"> audio-visual</a>, <a href="https://publications.waset.org/abstracts/search?q=Efficientnet" title=" Efficientnet"> Efficientnet</a>, <a href="https://publications.waset.org/abstracts/search?q=distillation" title=" distillation"> distillation</a> </p> <a href="https://publications.waset.org/abstracts/146625/audio-visual-recognition-based-on-effective-model-and-distillation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8008</span> Satisfaction of Distance Education University Students with the Use of Audio Media as a Medium of Instruction: The Case of Mountains of the Moon University in Uganda</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mark%20Kaahwa">Mark Kaahwa</a>, <a href="https://publications.waset.org/abstracts/search?q=Chang%20Zhu"> Chang Zhu</a>, <a href="https://publications.waset.org/abstracts/search?q=Moses%20Muhumuza"> Moses Muhumuza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates the satisfaction of distance education university students (DEUS) with the use of audio media as a medium of instruction. Studying students’ satisfaction is vital because it shows whether learners are comfortable with a certain instructional strategy or not. Although previous studies have investigated the use of audio media, the satisfaction of students with an instructional strategy that combines radio teaching and podcasts as an independent teaching strategy has not been fully investigated. In this study, all lectures were delivered through the radio and students had no direct contact with their instructors. No modules or any other material in form of text were given to the students. They instead, revised the taught content by listening to podcasts saved on their mobile electronic gadgets. Prior to data collection, DEUS received orientation through workshops on how to use audio media in distance education. To achieve objectives of the study, a survey, naturalistic observations and face-to-face interviews were used to collect data from a sample of 211 undergraduate and graduate students. Findings indicate that there was no statistically significant difference in the levels of satisfaction between male and female students. The results from post hoc analysis show that there is a statistically significant difference in the levels of satisfaction regarding the use of audio media between diploma and graduate students. Diploma students are more satisfied compared to their graduate counterparts. T-test results reveal that there was no statistically significant difference in the general satisfaction with audio media between rural and urban-based students. And ANOVA results indicate that there is no statistically significant difference in the levels of satisfaction with the use of audio media across age groups. 
Furthermore, results from observations and interviews reveal that DEUS found audio media a pleasurable medium of instruction. This indicates that audio media can be considered an instructional strategy on its own merit.
Keywords: audio media, distance education, distance education university students, medium of instruction, satisfaction
PDF: https://publications.waset.org/abstracts/100030.pdf | Downloads: 121

8007. Robust and Transparent Spread Spectrum Audio Watermarking
Authors: Ali Akbar Attari, Ali Asghar Beheshti Shirazi
Abstract: In this paper, we propose a blind and robust audio watermarking scheme based on spread spectrum in the Discrete Wavelet Transform (DWT) domain. Watermark bits are embedded in the low-frequency coefficients, where they are less audible. The key idea is to divide the audio signal into small frames and modify the magnitude of the 6th-level DWT approximation coefficients using the Direct Sequence Spread Spectrum (DSSS) technique. A psychoacoustic model is used to enhance imperceptibility, and a Savitzky-Golay filter to increase extraction accuracy. Experimental results show high robustness against the most common attacks, i.e. Gaussian noise addition, low-pass filtering, resampling, requantisation, and MP3 compression, without significant perceptual distortion (ODG higher than -1). The proposed scheme has a data payload of about 83 bps.
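A minimal sketch of the DWT-domain DSSS embedding the abstract describes: one watermark bit is spread over the 6th-level approximation coefficients of a frame by a keyed ±1 sequence, and detected by correlating with the same sequence. The wavelet, frame length, and (deliberately exaggerated) strength are assumptions, and the psychoacoustic shaping and Savitzky-Golay stages of the paper are omitted.

# Hedged sketch: one-bit DSSS watermark in 6th-level DWT approximation.
import numpy as np
import pywt

FRAME, ALPHA, WAVELET = 32768, 0.3, "db4"

def pn_sequence(key, n):
    return np.sign(np.random.default_rng(key).standard_normal(n))

def embed_bit(frame, bit, key):
    coeffs = pywt.wavedec(frame, WAVELET, level=6)
    cA = coeffs[0]
    pn = pn_sequence(key, len(cA))
    coeffs[0] = cA + ALPHA * np.mean(np.abs(cA)) * (1 if bit else -1) * pn
    return pywt.waverec(coeffs, WAVELET)[: len(frame)]

def detect_bit(frame, key):
    cA = pywt.wavedec(frame, WAVELET, level=6)[0]
    return float(np.dot(cA, pn_sequence(key, len(cA)))) > 0

audio = np.random.default_rng(0).standard_normal(FRAME)  # stand-in frame
print(detect_bit(embed_bit(audio, 1, key=7), key=7))      # expected: True
print(detect_bit(embed_bit(audio, 0, key=7), key=7))      # expected: False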
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20watermarking" title="audio watermarking">audio watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=spread%20spectrum" title=" spread spectrum"> spread spectrum</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title=" discrete wavelet transform"> discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=psychoacoustic" title=" psychoacoustic"> psychoacoustic</a>, <a href="https://publications.waset.org/abstracts/search?q=Savitsky-Golay%20filter" title=" Savitsky-Golay filter"> Savitsky-Golay filter</a> </p> <a href="https://publications.waset.org/abstracts/86040/robust-and-transparent-spread-spectrum-audio-watermarking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86040.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">200</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8006</span> Using Audio-Visual Aids and Computer-Assisted Language Instruction (CALI) to Overcome Learning Difficulties of Listening in Students of Special Needs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadeq%20Al%20Yaari">Sadeq Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Alkhunayn"> Muhammad Alkhunayn</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayman%20Al%20Yaari"> Ayman Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Montaha%20Al%20Yaari"> Montaha Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Adham%20Al%20Yaari"> Adham Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Sajedah%20Al%20Yaari"> Sajedah Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatehi%20Eissa"> Fatehi Eissa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background & Aims: Audio-visual aids and computer-aided language instruction (CALI) have been documented to improve receptive skills, namely listening skills, in normal students. The increased listening has been attributed to the understanding of other interlocutors' speech, but recent experiments have suggested that audio-visual aids and CALI should be tested against the listening of students of special needs to see the effects of the former in the latter. This investigation described the effect of audio-visual aids and CALI on the performance of these students. Methods: Pre-and-posttests were administered to 40 students of special needs of both sexes at al-Malādh school for students of special needs aged between 8 and 18 years old. A comparison was held between this group of students and another similar group (control group). Whereas the former group underwent a listening course using audio-visual aids and CALI, the latter studied the same course with the same speech language therapist (SLT) with the classical method. The outcomes of the two tests for the two groups were qualitatively and quantitatively analyzed. Results: Significant improvement in the performance was found in the first group (treatment group) (posttest= 72.45% vs. 
pre-test = 25.55%), compared to the second (control) group (post-test = 25.55% vs. pre-test = 23.72%). Females' scores were higher than males' (1487 vs. 1411). These results support the use of audio-visual aids and CALI in teaching listening at schools for students with special needs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=listening" title="listening">listening</a>, <a href="https://publications.waset.org/abstracts/search?q=receptive%20skills" title=" receptive skills"> receptive skills</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20aids" title=" audio-visual aids"> audio-visual aids</a>, <a href="https://publications.waset.org/abstracts/search?q=CALI" title=" CALI"> CALI</a>, <a href="https://publications.waset.org/abstracts/search?q=special%20needs" title=" special needs"> special needs</a> </p> <a href="https://publications.waset.org/abstracts/186406/using-audio-visual-aids-and-computer-assisted-language-instruction-cali-to-overcome-learning-difficulties-of-listening-in-students-of-special-needs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186406.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">48</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8005</span> Multi-Level Pulse Width Modulation to Boost the Power Efficiency of Switching Amplifiers for Analog Signals with Very High Crest Factor</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jan%20Doutreloigne">Jan Doutreloigne</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main goal of this paper is to develop a switching amplifier with optimized power efficiency for analog signals with a very high crest factor, such as audio or DSL signals. Theoretical calculations show that a switching amplifier architecture based on multi-level pulse width modulation outperforms all other types of linear or switching amplifiers in that respect. Simulations on a 2 W multi-level switching audio amplifier, designed in a 50 V 0.35 µm IC technology, confirm its superior performance in terms of power efficiency. A real silicon implementation of this audio amplifier design is currently underway to provide experimental validation.
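<p class="card-text">The efficiency argument turns on the crest factor: a signal that rarely approaches its peak can be reproduced most of the time from a low supply rail. A minimal Python sketch of both ideas (the rail values and the level-selection rule are illustrative assumptions, not taken from the paper):</p> <pre><code>
# Illustrative sketch: crest factor, and naive rail selection for multi-level PWM.
import numpy as np

def crest_factor(x):
    # Peak amplitude over RMS; audio and DSL sit far above a sine's ~1.41.
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

rng = np.random.default_rng(0)
t = np.arange(48_000) / 48_000
sine = np.sin(2 * np.pi * 1_000 * t)       # crest factor about 1.41
peaky = rng.normal(size=t.size)            # Gaussian signal, crest factor about 4
print(crest_factor(sine), crest_factor(peaky))

def supply_level(sample, rails=(0.25, 0.5, 0.75, 1.0)):
    # Pick the lowest supply rail that still covers the sample. For a peaky
    # signal most samples use a low rail, so switching and conduction losses
    # track the signal's typical level rather than its rare peaks.
    for rail in rails:
        if rail >= abs(sample):
            return rail
    return rails[-1]
</code></pre>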
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20amplifier" title="audio amplifier">audio amplifier</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-level%20switching%20amplifier" title=" multi-level switching amplifier"> multi-level switching amplifier</a>, <a href="https://publications.waset.org/abstracts/search?q=power%20efficiency" title=" power efficiency"> power efficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=pulse%20width%20modulation" title=" pulse width modulation"> pulse width modulation</a>, <a href="https://publications.waset.org/abstracts/search?q=PWM" title=" PWM"> PWM</a>, <a href="https://publications.waset.org/abstracts/search?q=self-oscillating%20amplifier" title=" self-oscillating amplifier"> self-oscillating amplifier</a> </p> <a href="https://publications.waset.org/abstracts/82607/multi-level-pulse-width-modulation-to-boost-the-power-efficiency-of-switching-amplifiers-for-analog-signals-with-very-high-crest-factor" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82607.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8004</span> Implementation and Performance Analysis of Data Encryption Standard and RSA Algorithm with Image Steganography and Audio Steganography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20C.%20Sharma">S. C. Sharma</a>, <a href="https://publications.waset.org/abstracts/search?q=Ankit%20Gambhir"> Ankit Gambhir</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajeev%20Arya"> Rajeev Arya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In today’s era, data security is an important and demanding concern because it is essential for people using online banking, e-shopping, reservations, etc. The two major techniques used for secure communication are cryptography and steganography. Cryptographic algorithms scramble the data so that an intruder will not be able to retrieve it; steganography, by contrast, hides the data in a cover file so that the very presence of communication is concealed. This paper presents the implementation of the Ron Rivest, Adi Shamir, and Leonard Adleman (RSA) algorithm with image and audio steganography, and of the Data Encryption Standard (DES) algorithm with image and audio steganography. Both algorithms were coded in MATLAB, and it was observed that the combined techniques performed better than the individual ones. The risk of unauthorized access is alleviated to a certain extent by using these techniques. These techniques could be used in banks, intelligence agencies (e.g., RAW), and other settings where highly confidential data is transferred. Finally, a comparison of the two techniques is also given in tabular form.
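<p class="card-text">As a rough illustration of the combined pipeline this abstract describes (encrypt first, then hide), the following Python sketch embeds a ciphertext into the least significant bits of 16-bit audio samples. A toy XOR keystream stands in for DES/RSA here; the paper itself implements the real ciphers in MATLAB, and a practical system would use a vetted cryptography library for that step:</p> <pre><code>
# Illustrative sketch: LSB audio steganography carrying an encrypted payload.
import numpy as np

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Stand-in keystream cipher; NOT secure, used only to show the pipeline.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def embed_lsb(samples: np.ndarray, payload: bytes) -> np.ndarray:
    # Overwrite the least significant bit of 16-bit PCM samples with the
    # payload bits; the change is typically below the audible noise floor.
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    out = samples.copy()
    out[:bits.size] = out[:bits.size] - out[:bits.size] % 2 + bits
    return out

def extract_lsb(samples: np.ndarray, n_bytes: int) -> bytes:
    bits = (samples[:n_bytes * 8] % 2).astype(np.uint8)
    return np.packbits(bits).tobytes()

cipher = toy_encrypt(b"secret report", b"key")
stego = embed_lsb(np.zeros(4_096, dtype=np.int16), cipher)
assert toy_encrypt(extract_lsb(stego, len(cipher)), b"key") == b"secret report"
</code></pre>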
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20steganography" title="audio steganography">audio steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20security" title=" data security"> data security</a>, <a href="https://publications.waset.org/abstracts/search?q=DES" title=" DES"> DES</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20steganography" title=" image steganography"> image steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=intruder" title=" intruder"> intruder</a>, <a href="https://publications.waset.org/abstracts/search?q=RSA" title=" RSA"> RSA</a>, <a href="https://publications.waset.org/abstracts/search?q=steganography" title=" steganography"> steganography</a> </p> <a href="https://publications.waset.org/abstracts/71013/implementation-and-performance-analysis-of-data-encryption-standard-and-rsa-algorithm-with-image-steganography-and-audio-steganography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71013.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">290</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8003</span> Agricultural Education by Media in Yogyakarta, Indonesia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Retno%20Dwi%20Wahyuningrum">Retno Dwi Wahyuningrum</a>, <a href="https://publications.waset.org/abstracts/search?q=Sunarru%20Samsi%20Hariadi"> Sunarru Samsi Hariadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Education in agriculture is significant in that it can help farmers improve their business. It can be delivered through certain media, such as printed, audio, and audio-visual media. To find out the effects of these media on farmers' knowledge, attitude, and motivation to adopt innovation, a study was conducted on 342 farmers, randomly selected from 12 farmer groups in the districts of Sleman and Bantul, Special Region of Yogyakarta Province. The study ran from October 2014 to November 2015, interviewing the respondents using a questionnaire that included 20 questions on knowledge, 20 on attitude, and 20 on adoption motivation. The attitude and adoption-motivation data were scored on a Likert scale and then tested for validity and reliability. Differences in the levels of knowledge, attitude, and motivation were tested based on percentage intervals of average scores and categorized into five interpretation levels. The results show that printed, audio, and audio-visual media have different impacts on farmers. First, all media make farmers highly aware of agricultural innovation, but the highest percentage is achieved by theatrical play. Second, the most effective medium for improving attitude is interactive dialogue on radio. Finally, printed media, especially comics, are the most effective way to improve farmers' motivation to adopt innovation.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=agricultural%20education" title="agricultural education">agricultural education</a>, <a href="https://publications.waset.org/abstracts/search?q=printed%20media" title=" printed media"> printed media</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20media" title=" audio media"> audio media</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20media" title=" audio-visual media"> audio-visual media</a>, <a href="https://publications.waset.org/abstracts/search?q=farmer%20knowledge" title=" farmer knowledge"> farmer knowledge</a>, <a href="https://publications.waset.org/abstracts/search?q=farmer%20attitude" title=" farmer attitude"> farmer attitude</a>, <a href="https://publications.waset.org/abstracts/search?q=farmer%20adopting%20motivation" title=" farmer adopting motivation"> farmer adopting motivation</a> </p> <a href="https://publications.waset.org/abstracts/77915/agricultural-education-by-media-in-yogyakarta-indonesia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8002</span> Using Audio-Visual Aids and Computer-Assisted Language Instruction to Overcome Learning Difficulties of Reading in Students of Special Needs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadeq%20Al%20Yaari">Sadeq Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayman%20Al%20Yaari"> Ayman Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Adham%20Al%20Yaari"> Adham Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Montaha%20Al%20Yaari"> Montaha Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Aayah%20Al%20Yaari"> Aayah Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Sajedah%20Al%20Yaari"> Sajedah Al Yaari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background & aims: Reading is a receptive skill in which abilities can vary widely from the linguistic standard. Much evidence supports the hypothesis that the more you read, the better you write, with different outcomes for speech language therapists (SLTs) who use audio-visual aids and computer-assisted language instruction (CALI) and those who do not. Methods: We used audio-visual aids and CALI to teach reading to a group of 40 students with special needs of both sexes (aged between 8 and 18) at al-Malādh school for teaching students of special needs in Dhamar (Yemen), while another group of the same size was taught using ordinary teaching methods. Pre- and post-tests were administered at the beginning and the end of the semester (before and after the reading course). The purpose was to understand the differences between the students' levels and to see to what extent audio-visual aids and CALI are useful for them. The two groups were taught by the same instructor under the same circumstances in the same school.
Both quantitative and qualitative procedures were used to analyze the data. Results: The overall findings revealed that audio-visual aids and CALI are very useful for teaching reading to students with special needs, as can be seen in the scores of the treatment group's subjects (7.0% in the post-test vs. 2.5% in the pre-test). In comparison to the second group's subjects, who were taught without audio-visual aids and CALI (2.2% in both pre- and post-tests), the first group's subjects mastered the reading tasks, as observed in their post-test performance. Females' performance was better than males' (1466 scores (7.3%) vs. 1371 scores (6.8%)). Qualitative and statistical analyses indicated that this improvement is attributable to the use of audio-visual aids and CALI. These outcomes confirm the significance of using audio-visual aids and CALI as effective means for teaching receptive skills in general and reading in particular. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=reading" title="reading">reading</a>, <a href="https://publications.waset.org/abstracts/search?q=receptive%20skills" title=" receptive skills"> receptive skills</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20aids" title=" audio-visual aids"> audio-visual aids</a>, <a href="https://publications.waset.org/abstracts/search?q=CALI" title=" CALI"> CALI</a>, <a href="https://publications.waset.org/abstracts/search?q=students" title=" students"> students</a>, <a href="https://publications.waset.org/abstracts/search?q=special%20needs" title=" special needs"> special needs</a>, <a href="https://publications.waset.org/abstracts/search?q=SLTs" title=" SLTs"> SLTs</a> </p> <a href="https://publications.waset.org/abstracts/186624/using-audio-visual-aids-and-computer-assisted-language-instruction-to-overcome-learning-difficulties-of-reading-in-students-of-special-needs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186624.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">49</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8001</span> Drone Classification Using Classification Methods Using Conventional Model With Embedded Audio-Visual Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hrishi%20Rakshit">Hrishi Rakshit</a>, <a href="https://publications.waset.org/abstracts/search?q=Pooneh%20Bagheri%20Zadeh"> Pooneh Bagheri Zadeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates the performance of drone classification methods using conventional deep convolutional neural networks (DCNNs) with different hyperparameters when additional drone audio data is embedded in the dataset for training and further classification. First, a custom dataset is created using different images of drones from University of Southern California (USC) datasets and Leeds Beckett University datasets, with embedded drone audio signals.
Three well-known DCNN architectures, namely ResNet50, Darknet53, and ShuffleNet, are employed on the created dataset, tuning hyperparameters such as the learning rate, maximum epochs, and mini-batch size with different optimizers. Precision-recall curves and F1-score-versus-threshold curves are used to evaluate the performance of the named classification algorithms. Experimental results show that ResNet50 has the highest efficiency compared to the other DCNN methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=drone%20classifications" title="drone classifications">drone classifications</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20convolutional%20neural%20network" title=" deep convolutional neural network"> deep convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperparameters" title=" hyperparameters"> hyperparameters</a>, <a href="https://publications.waset.org/abstracts/search?q=drone%20audio%20signal" title=" drone audio signal"> drone audio signal</a> </p> <a href="https://publications.waset.org/abstracts/172929/drone-classification-using-classification-methods-using-conventional-model-with-embedded-audio-visual-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172929.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">104</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8000</span> Musical Tesla Coil with Faraday Box Controlled by a GNU Radio</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jairo%20Vega">Jairo Vega</a>, <a href="https://publications.waset.org/abstracts/search?q=Fabian%20Chamba"> Fabian Chamba</a>, <a href="https://publications.waset.org/abstracts/search?q=Jordy%20Urgiles"> Jordy Urgiles</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, the implementation of a Matlab-controlled musical Tesla coil driven by external audio signals is presented. First, the audio signal was obtained from a mobile device and processed in Matlab to modify it, adding noise or other desired effects. The processed signal was then passed through a preamplifier to raise its amplitude to a level suitable for further amplification by a power amplifier, which formed part of the Tesla coil's current driver circuit. To make the Tesla coil generate music, a circuit capable of modulating and reproducing the audio signal by manipulating electrical discharges was used. To visualize and listen to these discharges, a small Faraday cage was built to attenuate external electric fields. With this, the implementation of the musical Tesla coil was completed. However, it was observed that the audio volume was very low and that the components heated up quickly. Due to these limitations, the system could not remain powered for long periods of time.
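<p class="card-text">For readers curious how audio is typically rendered as discharges in such builds, the sketch below (illustrative Python, not the paper's Matlab code) constructs an interrupter pulse train in which each note's pitch sets the spark repetition rate, while the on-time per spark is kept short; keeping the duty cycle low also limits the kind of component heating the authors report:</p> <pre><code>
# Illustrative sketch: note sequence to interrupter pulse train for a
# musical Tesla coil. Names and values are assumptions for illustration.
import numpy as np

def interrupter_pulses(notes, fs=44_100, pulse_us=100):
    # notes: list of (frequency_hz, duration_s) pairs.
    pulse_len = max(1, int(fs * pulse_us * 1e-6))    # on-time per spark burst
    out = []
    for freq, dur in notes:
        n = int(fs * dur)
        period = max(pulse_len + 1, int(round(fs / freq)))  # samples per spark
        chunk = np.zeros(n)
        for start in range(0, n - pulse_len, period):
            chunk[start:start + pulse_len] = 1.0     # short fixed-width pulse
        out.append(chunk)
    return np.concatenate(out)

# A4 (440 Hz) for half a second, then E5 (659.3 Hz):
train = interrupter_pulses([(440.0, 0.5), (659.3, 0.5)])
</code></pre>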
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tesla%20coil" title="Tesla coil">Tesla coil</a>, <a href="https://publications.waset.org/abstracts/search?q=plasma" title=" plasma"> plasma</a>, <a href="https://publications.waset.org/abstracts/search?q=electrical%20signals" title=" electrical signals"> electrical signals</a>, <a href="https://publications.waset.org/abstracts/search?q=GNU%20Radio" title=" GNU Radio"> GNU Radio</a> </p> <a href="https://publications.waset.org/abstracts/170861/musical-tesla-coil-with-faraday-box-controlled-by-a-gnu-radio" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170861.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">97</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7999</span> Digital Musical Organology: The Audio Games: The Question of “A-Musicological” Interfaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Herv%C3%A9%20Z%C3%A9nouda">Hervé Zénouda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article seeks to shed light on an emerging creative field: "audio games," at the crossroads between video games and computer music. Indeed, many applications offering entertaining audio-visual experiences with the objective of musical creation are available today for different platforms (game consoles, computers, cell phones). The originality of this field lies in applying the gameplay of video games to music composition. Thus, composing music using interfaces, and cognitive logics, that we qualify as "a-musicological" seems to us particularly interesting from the perspective of digital musical organology. This field raises questions about the representation of sound and musical structures and develops new instrumental gestures and strategies of musical composition. We will try in this article to define the characteristics of this field by highlighting some historical milestones (abstract cinema, game theory in music, action and graphic scores) as well as the novelties brought by digital technologies.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio-games" title="audio-games">audio-games</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20games" title=" video games"> video games</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20generated%20music" title=" computer generated music"> computer generated music</a>, <a href="https://publications.waset.org/abstracts/search?q=gameplay" title=" gameplay"> gameplay</a>, <a href="https://publications.waset.org/abstracts/search?q=interactivity" title=" interactivity"> interactivity</a>, <a href="https://publications.waset.org/abstracts/search?q=synesthesia" title=" synesthesia"> synesthesia</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20interfaces" title=" sound interfaces"> sound interfaces</a>, <a href="https://publications.waset.org/abstracts/search?q=relationships%20image%2Fsound" title=" relationships image/sound"> relationships image/sound</a>, <a href="https://publications.waset.org/abstracts/search?q=audiovisual%20music" title=" audiovisual music"> audiovisual music</a> </p> <a href="https://publications.waset.org/abstracts/152518/digital-musical-organology-the-audio-games-the-question-of-a-musicological-interfaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152518.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">112</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7998</span> Using Audio-Visual Aids and Computer-Assisted Language Instruction to Overcome Learning Difficulties of Sound System in Students of Special Needs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadeq%20Al%20Yaari">Sadeq Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayman%20Al%20Yaari"> Ayman Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Adham%20Al%20Yaari"> Adham Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Montaha%20Al%20Yaari"> Montaha Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Aayah%20Al%20Yaari"> Aayah Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Sajedah%20Al%20Yaari"> Sajedah Al Yaari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background & Objectives: Audio-visual aids and computer-assisted language instruction (CALI) have strong effects in teaching language components (the sound system, grammatical structures, and vocabulary) to students with special needs. To explore the effects of audio-visual aids and CALI in the teaching of the sound system to this class of students by speech language therapists (SLTs), an experiment was undertaken to evaluate the students' performance during their study of the sound system course. Methods: Forty students (males and females) with special needs at al-Malādh school for teaching students of special needs in Dhamar (Yemen), ranging between 8 and 18 years old, underwent this experimental study while studying the language sound system course. Pre- and post-tests were administered at the beginning and end of the semester.
The treatment group was compared to a similar group (control group) of the same size in the same environment. Whereas the first group was taught using audio-visual aids and CALI, the second was not. Students' performances were linguistically and statistically evaluated. Results & conclusions: Compared with the control group, the treatment group showed significantly higher scores in the post-test (72.32% vs. 31%). Compared with females, males scored higher marks (1421 vs. 1472). Thus, audio-visual aids and CALI should be taken into consideration when teaching the sound system to students with special needs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=language%20components" title="language components">language components</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20system" title=" sound system"> sound system</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20aids" title=" audio-visual aids"> audio-visual aids</a>, <a href="https://publications.waset.org/abstracts/search?q=CALI" title=" CALI"> CALI</a>, <a href="https://publications.waset.org/abstracts/search?q=students" title=" students"> students</a>, <a href="https://publications.waset.org/abstracts/search?q=special%20needs" title=" special needs"> special needs</a>, <a href="https://publications.waset.org/abstracts/search?q=SLTs" title=" SLTs"> SLTs</a> </p> <a href="https://publications.waset.org/abstracts/186619/using-audio-visual-aids-and-computer-assisted-language-instruction-to-overcome-learning-difficulties-of-sound-system-in-students-of-special-needs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186619.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">46</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7997</span> On Musical Information Geometry with Applications to Sonified Image Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shannon%20Steinmetz">Shannon Steinmetz</a>, <a href="https://publications.waset.org/abstracts/search?q=Ellen%20Gethner"> Ellen Gethner</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a theoretical foundation is developed for the patterned segmentation of audio using the geometry of music and statistical manifolds. We demonstrate image content clustering using conic space sonification. The algorithm takes a geodesic curve as a model estimator of the three-parameter Gamma distribution. The random variable is parameterized by musical centricity and centric velocity. Model parameters predict audio segmentation, in the form of duration and frame count, based on the likelihood of a musical geometry transition. We provide an example using a database of randomly selected images, resulting in statistically significant clusters of similar image content.
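<p class="card-text">Loosely following the duration-model idea in this abstract, the sketch below (Python with SciPy; the toy durations and the likelihood-gated decision are illustrative assumptions, and the "centricity"/"centric velocity" feature extraction is left as a placeholder) fits a three-parameter Gamma distribution and evaluates the likelihood of a candidate segment duration:</p> <pre><code>
# Illustrative sketch: three-parameter Gamma model for segment durations.
import numpy as np
from scipy import stats

durations = np.array([0.21, 0.35, 0.18, 0.52, 0.40, 0.27, 0.33])  # toy data (s)

# Maximum-likelihood fit of shape, location, and scale.
shape, loc, scale = stats.gamma.fit(durations)

# Likelihood of a candidate segment duration under the fitted model,
# which could gate a transition decision during segmentation.
candidate = 0.30
likelihood = stats.gamma.pdf(candidate, shape, loc=loc, scale=scale)
print(shape, loc, scale, likelihood)
</code></pre>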
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sonification" title="sonification">sonification</a>, <a href="https://publications.waset.org/abstracts/search?q=musical%20information%20geometry" title=" musical information geometry"> musical information geometry</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20extraction" title=" content extraction"> content extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=automated%20quantification" title=" automated quantification"> automated quantification</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20segmentation" title=" audio segmentation"> audio segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a> </p> <a href="https://publications.waset.org/abstracts/133600/on-musical-information-geometry-with-applications-to-sonified-image-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133600.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">237</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7996</span> Audio-Lingual Method and the English-Speaking Proficiency of Grade 11 Students</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marthadale%20Acibo%20Semacio">Marthadale Acibo Semacio</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speaking is a crucial part of English language teaching and learning, which underscores the great importance of this skill in English language classes. Through speaking, ideas and thoughts are shared with other people, and smooth interaction between people takes place. The study examined the speaking proficiency levels of the control and experimental groups in pronunciation, grammatical accuracy, and fluency. As a quasi-experimental study, it also determined the presence or absence of significant changes in the groups' speaking proficiency levels, in terms of correct pronunciation, grammatical accuracy, and fluency, under the two methods of teaching English: the traditional and the audio-lingual method. Descriptive and inferential statistics were employed according to the stated specific problems. The study employed a video presentation with prior information about it. In the video, the teacher acts as a model, giving instructions on what is to be done, and then the students perform the activity. The students were paired purposively based on their learning capabilities. Observing proper ethics, their performance was audio-recorded to help the researcher assess the learners using the modified speaking rubric. The study revealed that those under the traditional method were more fluent than those under the audio-lingual method.
With respect to how each method deals with the feelings of the student, the audio-lingual method fails to provide a principle relating to this area and follows the assumption that students' intrinsic motivation to learn the target language will spring from their interest in the structure of the language. However, the students' speaking proficiency levels were remarkably reinforced in reading different words through the aid of aural media used with their teachers. The study concluded that the audio-lingual method is not a stand-alone method but an aid that helps teachers improve their students' speaking proficiency in the English language. Hence, the audio-lingual approach is encouraged in teaching the English language, on top of the chalk-and-talk or traditional method, to improve the speaking proficiency of students. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio-lingual" title="audio-lingual">audio-lingual</a>, <a href="https://publications.waset.org/abstracts/search?q=speaking" title=" speaking"> speaking</a>, <a href="https://publications.waset.org/abstracts/search?q=grammar" title=" grammar"> grammar</a>, <a href="https://publications.waset.org/abstracts/search?q=pronunciation" title=" pronunciation"> pronunciation</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=fluency" title=" fluency"> fluency</a>, <a href="https://publications.waset.org/abstracts/search?q=proficiency" title=" proficiency"> proficiency</a> </p> <a href="https://publications.waset.org/abstracts/161963/audio-lingual-method-and-the-english-speaking-proficiency-of-grade-11-students" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">68</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=10">10</a></li>
<li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=267">267</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=268">268</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=influence%20of%20audio&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> 
<div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>