<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: percussive sounds</title> <meta name="description" content="Search results for: percussive sounds"> <meta name="keywords" content="percussive sounds"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" 
alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="percussive sounds" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div 
class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="percussive sounds"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 182</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: percussive sounds</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">182</span> A Combined Feature Extraction and Thresholding Technique for Silence Removal in Percussive Sounds </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Kishore%20Kumar">B. Kishore Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Pogula%20Rakesh"> Pogula Rakesh</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Kishore%20Kumar"> T. Kishore Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The music analysis is a part of the audio content analysis used to analyze the music by using the different features of audio signal. In music analysis, the first step is to divide the music signal to different sections based on the feature profiles of the music signal. 
In this paper, we present a music segmentation technique that effectively segments the signal, together with a thresholding technique that removes silence from the sounds produced by percussive instruments; the method uses two features of the music signal, namely signal energy and spectral centroid. The proposed method imposes a threshold on each feature, and these thresholds vary depending on the music signal. Based on the thresholds, the silent parts are removed and the segmentation is performed. The effectiveness of the proposed method is analyzed using MATLAB. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=percussive%20sounds" title="percussive sounds">percussive sounds</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20centroid" title=" spectral centroid"> spectral centroid</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20energy" title=" spectral energy"> spectral energy</a>, <a href="https://publications.waset.org/abstracts/search?q=silence%20removal" title=" silence removal"> silence removal</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/25510/a-combined-feature-extraction-and-thresholding-technique-for-silence-removal-in-percussive-sounds" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25510.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">593</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">181</span> A Novel Method for Silence Removal in Sounds Produced by Percussive Instruments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=B.%20Kishore%20Kumar">B. Kishore Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Rakesh%20Pogula"> Rakesh Pogula</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Kishore%20Kumar"> T. Kishore Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The steepness of an audio signal produced by musical instruments, specifically percussive instruments, is the perception of how high or low a tone is, which can be considered a frequency closely related to the fundamental frequency. This paper presents a novel method for silence removal and segmentation of music signals produced by percussive instruments, and the performance of the proposed method is studied with the help of MATLAB simulations. This method is based on two simple features, namely the signal energy and the spectral centroid. Once the feature sequences are extracted, a simple thresholding criterion is applied in order to remove the silence areas in the sound signal. The simulations were carried out on various instruments such as the drum, flute, and guitar, and the results of the proposed method were analyzed. 
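The two-feature thresholding approach described in the two abstracts above can be sketched in a few lines. This is a minimal illustration only, not the authors' MATLAB implementation; the frame length, hop size, and threshold ratios are assumed values:

```python
import numpy as np

def frame_features(x, sr, frame_len=1024, hop=512):
    """Short-time signal energy and spectral centroid, one value per frame."""
    energies, centroids = [], []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        energies.append(np.sum(frame ** 2))
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        # Centroid = magnitude-weighted mean frequency (guard against silence).
        centroids.append(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return np.array(energies), np.array(centroids)

def silence_mask(x, sr, energy_ratio=0.1, centroid_ratio=0.5):
    """Mark a frame as sound only when BOTH features exceed
    signal-dependent thresholds; everything else is treated as silence."""
    energy, centroid = frame_features(x, sr)
    e_thr = energy_ratio * energy.max()
    c_thr = centroid_ratio * np.median(centroid)
    return (energy > e_thr) & (centroid > c_thr)
```

Because both thresholds are derived from the signal itself (a fraction of the maximum energy and of the median centroid), the criterion adapts to recordings of different loudness, which is the "thresholds vary depending on the music signal" behaviour the abstract describes.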
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=percussive%20instruments" title="percussive instruments">percussive instruments</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20energy" title=" spectral energy"> spectral energy</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20centroid" title=" spectral centroid"> spectral centroid</a>, <a href="https://publications.waset.org/abstracts/search?q=silence%20removal" title=" silence removal"> silence removal</a> </p> <a href="https://publications.waset.org/abstracts/14246/a-novel-method-for-silence-removal-in-sounds-produced-by-percussive-instruments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14246.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">411</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">180</span> Difficulties in Pronouncing the English Bilabial Plosive Sounds among EFL Students</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Mohammed%20Saleh%20Al-Hamzi">Ali Mohammed Saleh Al-Hamzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to find out the most difficult position in pronouncing the bilabial plosive sounds among fourth-level English foreign language students of the Faculty of Education, Mahweet, Sana’a University in Yemen. The subjects of this study were 50 English foreign language students aged 22-25. 
In describing sounds according to their place of articulation, sounds are classified as bilabial, labiodental, dental, alveolar, post-alveolar, palato-alveolar, retroflex, palatal, velar, uvular, and glottal. In much the same way, sounds can be described by their manner of articulation as plosives, nasals, affricates, flaps, taps, rolls, fricatives, laterals, frictionless continuants, and semi-vowels. For English foreign language students in Yemen, some of these sounds are difficult to pronounce. In this study, the researcher focuses on difficulties in pronouncing the English bilabial plosive sounds among English foreign language students, which can occur in the initial, medial, and final positions. The problem discussed in this study was: which position is the most difficult in pronouncing the English bilabial plosive sounds? To solve the problem, a descriptive qualitative method was adopted in this study. The data were collected from the English bilabial plosive sounds produced by the students. Finally, the researcher found that the most difficult position in pronouncing the English bilabial plosive sounds is when the English bilabial plosives /p/ and /b/ occur word-finally, where both are voiceless. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=difficulty" title="difficulty">difficulty</a>, <a href="https://publications.waset.org/abstracts/search?q=EFL%20students%E2%80%99%20pronunciation" title=" EFL students’ pronunciation"> EFL students’ pronunciation</a>, <a href="https://publications.waset.org/abstracts/search?q=bilabial%20sounds" title=" bilabial sounds"> bilabial sounds</a>, <a href="https://publications.waset.org/abstracts/search?q=plosive%20sounds" title=" plosive sounds"> plosive sounds</a> </p> <a href="https://publications.waset.org/abstracts/128142/difficulties-in-pronouncing-the-english-bilabial-plosive-sounds-among-efl-students" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">146</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">179</span> Heart Murmurs and Heart Sounds Extraction Using an Algorithm Process Separation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatima%20Mokeddem">Fatima Mokeddem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The phonocardiogram (PCG) signal is a physiological signal that reflects the heart's mechanical activity and is a promising tool for researchers in this field because it is full of indications and useful information for medical diagnosis. PCG segmentation is a basic step in benefiting from this signal. Therefore, this paper presents an algorithm that separates heart sounds and heart murmurs, in case they exist, so that they can be used in several applications and in heart sound analysis. 
The separation process presented here is founded on three essential steps: filtering, envelope detection, and heart sound segmentation. The algorithm separates the PCG signal into S1 and S2 and extracts cardiac murmurs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=phonocardiogram%20signal" title="phonocardiogram signal">phonocardiogram signal</a>, <a href="https://publications.waset.org/abstracts/search?q=filtering" title=" filtering"> filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=Envelope" title=" Envelope"> envelope</a>, <a href="https://publications.waset.org/abstracts/search?q=Detection" title=" Detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=murmurs" title=" murmurs"> murmurs</a>, <a href="https://publications.waset.org/abstracts/search?q=heart%20sounds" title=" heart sounds"> heart sounds</a> </p> <a href="https://publications.waset.org/abstracts/114970/heart-murmurs-and-heart-sounds-extraction-using-an-algorithm-process-separation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114970.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">178</span> Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20J.%20Addinell">S. J. Addinell</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Richard"> T. Richard</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Evans"> B. 
Evans</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising downhole water powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses some of the main technical challenges. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power law relationship for particle size distributions. Several percussive drilling parameters such as RPM, applied fluid pressure and weight on bit have been shown to influence the particle size distributions of the cuttings generated. This has direct influence on other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of percussive system operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear and the applied weight on bit can also influence the oscillation frequency. 
Due to the changing drilling conditions and therefore changing operating parameters, real-time understanding of the natural operating frequency is paramount to achieving system optimisation. Several techniques to understand the oscillating frequency have been investigated and presented. With a conventional top drive drilling rig, spectral analysis of applied fluid pressure, hydraulic feed force pressure, hold back pressure and drill string vibrations have shown the presence of the operating frequency of the bottom hole tooling. Unfortunately, however, with the implementation of a coiled tubing drilling rig, implementing a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface and therefore another method must be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques have indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for the determination of the downhole percussive oscillation frequency when used with a coiled tubing drill rig. 
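The spectral approach described above, locating the hammer's operating frequency as the dominant peak in a surface-sensor recording, can be sketched as follows. The function name, band limits, and sampling rate are illustrative assumptions, not values from the study:

```python
import numpy as np

def dominant_frequency(signal, sample_rate, fmin=5.0, fmax=200.0):
    """Return the frequency of the strongest spectral peak within
    [fmin, fmax] Hz, e.g. a percussive hammer's oscillation frequency
    estimated from a geophone trace. A Hann window reduces leakage."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spectrum[band])]
```

Restricting the search to a band of interest keeps low-frequency drift and high-frequency rig noise from masking the hammer peak; frequency resolution is the reciprocal of the analysis window length, so longer records give finer estimates.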
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cuttings%20characterization" title="cuttings characterization">cuttings characterization</a>, <a href="https://publications.waset.org/abstracts/search?q=drilling%20optimization" title=" drilling optimization"> drilling optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=oscillation%20frequency" title=" oscillation frequency"> oscillation frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=percussive%20drilling" title=" percussive drilling"> percussive drilling</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20analysis" title=" spectral analysis"> spectral analysis</a> </p> <a href="https://publications.waset.org/abstracts/59480/parameter-selection-and-monitoring-for-water-powered-percussive-drilling-in-green-fields-mineral-exploration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59480.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">230</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">177</span> Investigating Underground Explosion-Like Sounds in Sarableh City and Its Possible Connection with Geological Hazards</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hosein%20Almasikia">Hosein Almasikia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sarableh City is located in the west of Iran and in the seismic zone of Zagros. After the Azgole-Sarpol Zahab earthquake with a magnitude of 3.7 Richter on November 21, 2016, in some parts of Sarableh city, horrible sounds were heard by people. 
Some of the residents also reported a sound similar to that of a grinding mill. Vibration studies and field investigations showed that these sounds have a geological origin and are emitted from the ground to the surface and may be related to geological hazards such as landslides, the collapse of karstic zones, etc. In this study, an attempt has been made to investigate the possible relationship between these abnormal sounds and geological hazards. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sarable" title="Sarable">Sarable</a>, <a href="https://publications.waset.org/abstracts/search?q=Zagros" title=" Zagros"> Zagros</a>, <a href="https://publications.waset.org/abstracts/search?q=landslide" title=" landslide"> landslide</a>, <a href="https://publications.waset.org/abstracts/search?q=karstic%20zone" title=" karstic zone"> karstic zone</a> </p> <a href="https://publications.waset.org/abstracts/173601/investigating-underground-explosion-like-sounds-in-sarableh-city-and-its-possible-connection-with-geological-hazards" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173601.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">64</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">176</span> The Influence of Music Education and the Order of Sounds on the Grouping of Sounds into Sequences of Six Tones</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adam%20Rosi%C5%84ski">Adam Rosiński</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper discusses an experiment conducted with two groups of participants, composed of musicians and non-musicians, in order to 
investigate the impact of the speed of a sound sequence and the order of sounds on the grouping of sounds into sequences of six tones. Significant differences were observed between musicians and non-musicians with respect to the threshold sequence speed at which the sequence was split into two streams. The differences in the results for the two groups suggest that the musical education of the participating listeners may be a vital factor. The criterion of musical education should be taken into account during experiments so that the results obtained are reliable, uniform, and free from interpretive errors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=auditory%20scene%20analysis" title="auditory scene analysis">auditory scene analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=education" title=" education"> education</a>, <a href="https://publications.waset.org/abstracts/search?q=hearing" title=" hearing"> hearing</a>, <a href="https://publications.waset.org/abstracts/search?q=psychoacoustics" title=" psychoacoustics"> psychoacoustics</a> </p> <a href="https://publications.waset.org/abstracts/158683/the-influence-of-music-education-and-the-order-of-sounds-on-the-grouping-of-sounds-into-sequences-of-six-tones" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158683.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">175</span> Development of Sound Tactile Interface by Use of Human Sensation of Stiffness</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Doi">K. 
Doi</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Nishimura"> T. Nishimura</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Umeda"> M. Umeda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There are very few sound interfaces that both healthy people and hearing handicapped people can use to play together. In this study, we developed a sound tactile interface that makes use of the human sensation of stiffness. The interface comprises eight elastic objects having varying degrees of stiffness. Each elastic object is shaped like a column. When people with and without hearing disabilities press each elastic object, different sounds are produced depending on the stiffness of the elastic object. The types of sounds used were “Do Re Mi sounds.” The interface has a major advantage in that people with or without hearing disabilities can play with it. We found that users were able to recognize the hardness sensation and relate it to the corresponding Do Re Mi sounds. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tactile%20sense" title="tactile sense">tactile sense</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20interface" title=" sound interface"> sound interface</a>, <a href="https://publications.waset.org/abstracts/search?q=stiffness%20perception" title=" stiffness perception"> stiffness perception</a>, <a href="https://publications.waset.org/abstracts/search?q=elastic%20object" title=" elastic object"> elastic object</a> </p> <a href="https://publications.waset.org/abstracts/13051/development-of-sound-tactile-interface-by-use-of-human-sensation-of-stiffness" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13051.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">285</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">174</span> Design of a Real Time Heart Sounds Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omer%20Abdalla%20Ishag">Omer Abdalla Ishag</a>, <a href="https://publications.waset.org/abstracts/search?q=Magdi%20Baker%20Amien"> Magdi Baker Amien</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Physicians use the stethoscope to listen to a patient's heart sounds in order to make a diagnosis. However, determining heart conditions with an acoustic stethoscope is a difficult task, so it requires special training of the medical staff. This study developed an accurate model for analyzing the phonocardiograph signal based on a PC and a DSP processor. The system has been realized in two phases: an offline phase and a real-time phase. 
In the offline phase, 30 heart sound files were collected from medical students and the doctor's world website. For the experimental (real-time) phase, an electronic stethoscope was designed and implemented, and signals were recorded from 30 volunteers: 17 were normal cases and 13 had various pathologies. These 30 acquired signals were preprocessed using an adaptive filter to remove lung sounds. The background noise was removed from both the offline and real data using the wavelet transform, then graphical and statistical feature vector elements were extracted, and finally a look-up table was used to classify the heart sound cases. The results of the implemented system showed accuracies of 90% and 80% and sensitivities of 87.5% and 82.4% for the offline data and real data, respectively. The whole system has been designed on a TMS320VC5509a DSP platform. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=code%20composer%20studio" title="code composer studio">code composer studio</a>, <a href="https://publications.waset.org/abstracts/search?q=heart%20sounds" title=" heart sounds"> heart sounds</a>, <a href="https://publications.waset.org/abstracts/search?q=phonocardiograph" title=" phonocardiograph"> phonocardiograph</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20transform" title=" wavelet transform"> wavelet transform</a> </p> <a href="https://publications.waset.org/abstracts/37634/design-of-a-real-time-heart-sounds-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37634.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">173</span> Optimizing Solids Control and 
Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20J.%20Addinell">S. J. Addinell</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20F.%20Grabsch"> A. F. Grabsch</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20D.%20Fawell"> P. D. Fawell</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Evans"> B. Evans</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom hole tooling, excessive quantities of high quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool life longevity, the percussive bottom hole assembly requires high quality fluid with minimal solids loading and any recycled fluid needs to have a solids cut point below 40 microns and a concentration less than 400 ppm before it can be used to reenergise the system. 
This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. This data is critical in optimising solids control strategies and cuttings dewatering techniques. Optimisation of deployable solids control equipment is discussed and how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings. Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub 40 micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. Xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle laden returned drilling fluid is used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representivity by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data pertaining to the sample biasing with respect to geochemical signatures due to particle size distributions is presented and shows that, depending on the solids control and dewatering techniques used, it can have unwanted influence on top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the successful advancement of coiled tubing based greenfields mineral exploration. 
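The power-law behaviour reported for the cuttings' particle size distributions can be checked with a straightforward log-log regression: a power law plots as a straight line in log-log coordinates. The sketch below uses hypothetical sieve data; the function name and values are illustrative, not taken from the study:

```python
import numpy as np

def fit_power_law(size_microns, mass_fraction):
    """Fit mass_fraction ≈ a * size**b by least squares in log-log space.
    Returns (a, b), where b is the power-law exponent; a near-perfect
    straight-line fit in log-log space indicates power-law behaviour."""
    slope, intercept = np.polyfit(np.log(size_microns), np.log(mass_fraction), 1)
    return np.exp(intercept), slope
```

Fitting in log space weights the small and large size classes evenly, which matters when, as here, both a coarse majority and a significant fines fraction must be characterised for solids-control decisions.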
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cuttings" title="cuttings">cuttings</a>, <a href="https://publications.waset.org/abstracts/search?q=dewatering" title=" dewatering"> dewatering</a>, <a href="https://publications.waset.org/abstracts/search?q=flocculation" title=" flocculation"> flocculation</a>, <a href="https://publications.waset.org/abstracts/search?q=percussive%20drilling" title=" percussive drilling"> percussive drilling</a>, <a href="https://publications.waset.org/abstracts/search?q=solids%20control" title=" solids control"> solids control</a> </p> <a href="https://publications.waset.org/abstracts/59463/optimizing-solids-control-and-cuttings-dewatering-for-water-powered-percussive-drilling-in-mineral-exploration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59463.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">248</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">172</span> Robust Heart Sounds Segmentation Based on the Variation of the Phonocardiogram Curve Length</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mecheri%20Zeid%20Belmecheri">Mecheri Zeid Belmecheri</a>, <a href="https://publications.waset.org/abstracts/search?q=Maamar%20Ahfir"> Maamar Ahfir</a>, <a href="https://publications.waset.org/abstracts/search?q=Izzet%20Kale"> Izzet Kale</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic cardiac auscultation is still a subject of research in order to establish an objective diagnosis. 
Heart sounds recorded as Phonocardiogram (PCG) signals can be automatically segmented into components that have clinical meanings: the first heart sound, S1, the second heart sound, S2, and the intervening systolic and diastolic components. In this paper, an automatic method is proposed for the robust segmentation of heart sounds. The method calculates an intermediate sawtooth-shaped signal from the variation of the curve length of the recorded PCG signal in the time domain, and uses its positive derivative, which is a binary signal, to train a Recurrent Neural Network (RNN). Results obtained on a large database of PCGs recorded simultaneously with ElectroCardioGrams (ECGs) from different patients in clinical settings, including normal and abnormal subjects, show an average segmentation testing performance of 76% sensitivity and 94% specificity. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heart%20sounds" title="heart sounds">heart sounds</a>, <a href="https://publications.waset.org/abstracts/search?q=PCG%20segmentation" title=" PCG segmentation"> PCG segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=event%20detection" title=" event detection"> event detection</a>, <a href="https://publications.waset.org/abstracts/search?q=recurrent%20neural%20networks" title=" recurrent neural networks"> recurrent neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=PCG%20curve%20length" title=" PCG curve length"> PCG curve length</a> </p> <a href="https://publications.waset.org/abstracts/157289/robust-heart-sounds-segmentation-based-on-the-variation-of-the-phonocardiogram-curve-length" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157289.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right 
rounded"> Downloads <span class="badge badge-light">178</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">171</span> Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jia%20Xin%20Low">Jia Xin Low</a>, <a href="https://publications.waset.org/abstracts/search?q=Keng%20Wah%20Choo"> Keng Wah Choo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an automatic normal and abnormal heart sound classification model developed based on a deep learning algorithm. MITHSDB heart sounds datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heart beat, and each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to provide classification features for the supervised machine learning algorithm. Instead, the features are determined automatically through training, from the time series provided. The results show that the prediction model provides reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g., remote monitoring applications of PCG signals. 
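The per-beat preprocessing described here can be sketched in a few lines: each segmented beat is resampled to a fixed length and folded into a square intensity matrix that a CNN can consume like an image. The segment length and the 32x32 matrix size below are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def beat_to_square_matrix(segment: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample a 1-D per-beat PCG segment to n*n samples and fold it into
    an n-by-n intensity matrix (a sketch of the idea; n is an assumed size)."""
    # Linear interpolation onto a fixed length so every beat yields the same shape.
    x_old = np.linspace(0.0, 1.0, num=len(segment))
    x_new = np.linspace(0.0, 1.0, num=n * n)
    resampled = np.interp(x_new, x_old, segment)
    # Min-max normalise to [0, 1] so amplitudes act like pixel intensities.
    rng = resampled.max() - resampled.min()
    if rng > 0:
        resampled = (resampled - resampled.min()) / rng
    return resampled.reshape(n, n)

beat = np.sin(np.linspace(0, 20 * np.pi, 1000))  # stand-in for one heart beat
matrix = beat_to_square_matrix(beat)
print(matrix.shape)  # (32, 32)
```

Fixing the matrix shape is what lets beats of different durations share one CNN input layer; the network then learns its own features from these matrices rather than from hand-crafted descriptors.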
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title=" discrete wavelet transform"> discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification" title=" heart sound classification"> heart sound classification</a> </p> <a href="https://publications.waset.org/abstracts/85039/automatic-classification-of-periodic-heart-sounds-using-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85039.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">348</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">170</span> Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joao%20Victor%20Borges%20Dos%20Santos">Joao Victor Borges Dos Santos</a>, <a href="https://publications.waset.org/abstracts/search?q=Thomas%20Richard"> Thomas Richard</a>, <a href="https://publications.waset.org/abstracts/search?q=Yevhen%20Kovalyshen"> Yevhen Kovalyshen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Down-the-hole (DTH) percussive drilling is an excavation method that is widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. 
A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. In the literature of percussive drilling, the existence of an optimum drilling state (Sweet Spot) is reported in some laboratory and field experimental studies. An optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit), beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the occurrence of the Sweet Spot, but a universal explanation or consensus does not exist yet. The experimental investigation in this work was initiated with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high precision sensors sampled at a very high sampling rate (kHz). Data was collected while two boreholes were being excavated; an in-depth analysis of the recorded data confirmed that an optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate made it possible to identify the bit penetration at each single impact (of the piston on the drill bit) as well as the impact frequency. These measurements provide a direct method to identify when the hammer does not fire and drilling occurs without percussion, with the bit advancing the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with custom-built equipment dubbed Woody. Woody allows the drilling of shallow holes a few centimetres deep by successive discrete impacts from a piston. 
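As a rough illustration of why those two per-impact quantities matter: together they determine the instantaneous rate of penetration, and a run of zero-penetration impacts flags a misfiring hammer. All numbers below are invented for the sketch, not field data.

```python
# Illustrative values only: per-impact bit penetration (mm) and impact
# frequency (Hz) of the kind recoverable from high-rate (kHz) drilling data.
penetration_per_impact_mm = 0.25
impact_frequency_hz = 25.0

# Rate of penetration = penetration per impact * impact frequency,
# converted from mm/s to the metres-per-hour figure usually quoted.
rop_mm_per_s = penetration_per_impact_mm * impact_frequency_hz
rop_m_per_h = rop_mm_per_s * 3600.0 / 1000.0
print(f"ROP = {rop_m_per_h:.1f} m/h")  # 22.5 m/h

# A string of zero-penetration impacts flags intervals where the hammer
# does not fire and the borehole advances by shearing alone.
impacts_mm = [0.24, 0.26, 0.0, 0.0, 0.25]
misfires = sum(1 for p in impacts_mm if p == 0.0)
print(f"misfired impacts: {misfires} of {len(impacts_mm)}")
```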
After each individual impact, the bit angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are set back to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red, with a drill bit of 74 millimetres (outside diameter) and with weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that, under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the sweet spot is not caused by intrinsic properties of the bit-rock interface. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optimum%20drilling%20state" title="optimum drilling state">optimum drilling state</a>, <a href="https://publications.waset.org/abstracts/search?q=experimental%20investigation" title=" experimental investigation"> experimental investigation</a>, <a href="https://publications.waset.org/abstracts/search?q=field%20experiments" title=" field experiments"> field experiments</a>, <a href="https://publications.waset.org/abstracts/search?q=laboratory%20experiments" title=" laboratory experiments"> laboratory experiments</a>, <a href="https://publications.waset.org/abstracts/search?q=down-the-hole%20percussive%20drilling" title=" down-the-hole percussive drilling"> down-the-hole percussive drilling</a> </p> <a href="https://publications.waset.org/abstracts/159985/optimum-drilling-states-in-down-the-hole-percussive-drilling-an-experimental-investigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159985.pdf" target="_blank" class="btn 
btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">169</span> From the “Movement Language” to Communication Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahmudjon%20Kuchkarov">Mahmudjon Kuchkarov</a>, <a href="https://publications.waset.org/abstracts/search?q=Marufjon%20Kuchkarov"> Marufjon Kuchkarov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The origin of ‘Human Language’ is still a mystery and among the most interesting subjects of historical linguistics. The core element is the nature of labeling or coding things or processes with symbols and sounds. In this paper, we investigate humans’ involuntary Paired Sounds and Shape Production (PSSP) and its contribution to the development of early human communication. Working with twenty-six volunteers who performed many physical movements of varying difficulty, the research team investigated the natural, repeatable, and paired sounds and shapes produced during human activities. The paper claims the involvement of Paired Sounds and Shape Production (PSSP) in the phonetic origin of some modern words and the existence of similarities between elements of PSSP and characters of the classic Latin alphabet. The results may be used not only to support existing theories but also to take a closer look at the fundamental nature of the origin of languages. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=body%20shape" title="body shape">body shape</a>, <a href="https://publications.waset.org/abstracts/search?q=body%20language" title=" body language"> body language</a>, <a href="https://publications.waset.org/abstracts/search?q=coding" title=" coding"> coding</a>, <a href="https://publications.waset.org/abstracts/search?q=Latin%20alphabet" title=" Latin alphabet"> Latin alphabet</a>, <a href="https://publications.waset.org/abstracts/search?q=merging%20method" title=" merging method"> merging method</a>, <a href="https://publications.waset.org/abstracts/search?q=movement%20language" title=" movement language"> movement language</a>, <a href="https://publications.waset.org/abstracts/search?q=movement%20sound" title=" movement sound"> movement sound</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20sound" title=" natural sound"> natural sound</a>, <a href="https://publications.waset.org/abstracts/search?q=origin%20of%20language" title=" origin of language"> origin of language</a>, <a href="https://publications.waset.org/abstracts/search?q=pairing" title=" pairing"> pairing</a>, <a href="https://publications.waset.org/abstracts/search?q=phonetics" title=" phonetics"> phonetics</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20and%20shape%20production" title=" sound and shape production"> sound and shape production</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20origin" title=" word origin"> word origin</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20semantic" title=" word semantic"> word semantic</a> </p> <a href="https://publications.waset.org/abstracts/160314/from-the-movement-language-to-communication-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160314.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> 
<span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">168</span> The Effect of the Pronunciation of Emphatic Sounds on Perceived Masculinity/Femininity</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Sayyour">M. Sayyour</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Abdulkareem"> M. Abdulkareem</a>, <a href="https://publications.waset.org/abstracts/search?q=O.%20Osman"> O. Osman</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Salmeh"> S. Salmeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Emphatic sounds in Arabic are /tˤ/, /sˤ/, /dˤ/, and /ðˤ/. They involve a secondary articulation in the pharynx area, as opposed to their counterparts /t/, /s/, /d/ and /ð/. Although they are present in most Arabic dialects, some dialects, such as Maltese Arabic, have lost this class as a historical development. Studies have found a difference in the pronunciation of these emphatic sounds between the two genders, arguing that males tend to produce more evident emphasis than females. This study builds on that work by investigating whether listeners perceive fully emphatic sounds as more masculine and less emphatic sounds as more feminine. Furthermore, the study aims to find out which is more important in this perception process: the emphatic consonant itself or the vowel following it. To test this, natural and manipulated tokens of two male and two female speakers were used. The natural tokens include words that have an emphatic consonant and an emphatic vowel and tokens that have a plain consonant and a plain vowel. 
The manipulated tokens include words that have an emphatic consonant but a central vowel, and words that have a plain consonant followed by the same central vowel. These manipulated tokens allow us to see whether the consonant will still affect the perception even if the vowel is controlled. Another group of words that contained no emphatic sounds was used as a control group. The total number of tokens (natural, manipulated, and control) is 160. After that, 60 university students (30 males and 30 females) listened to these tokens and responded by choosing a specific character that they thought was likely to produce each token. The characters’ descriptions are carefully written with two degrees of femininity and two degrees of masculinity. The preliminary results for the femininity level showed that the highest degree of femininity was for tokens that contain a plain consonant and a plain vowel. The lowest level of femininity was given for tokens that have a fully emphatic consonant and vowel. For the manipulated tokens that contained a plain consonant and a central vowel, the femininity degree was high, which indicates that the consonant is more important than the vowel, while for the manipulated tokens that contain an emphatic consonant and a central vowel, the femininity level was higher than that for the tokens that have an emphatic consonant and an emphatic vowel, which indicates that the vowel is more important for the perception of emphatic consonants. These results are interpreted in light of feminist linguistic theories, linguistic expectations, performed gender and linguistic change theories. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emphatic%20sounds" title="Emphatic sounds">Emphatic sounds</a>, <a href="https://publications.waset.org/abstracts/search?q=gender%20studies" title=" gender studies"> gender studies</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=sociophonetics" title=" sociophonetics"> sociophonetics</a> </p> <a href="https://publications.waset.org/abstracts/31361/the-effect-of-the-pronunciation-of-emphatic-sounds-on-perceived-masculinityfemininity" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31361.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">382</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">167</span> Problems of Learning English Vowels Pronunciation in Nigeria</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wasila%20Lawan%20Gadanya">Wasila Lawan Gadanya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper examines the problems of learning English vowel pronunciation. The objective is to identify some of the factors that affect the learning of English vowel sounds and their proper realization in words. The theoretical framework adopted is based on both error analysis and contrastive analysis. The data collection instruments used in the study are a questionnaire and a word list for the respondents (students), and observation of some of their lecturers. All the data collected were analyzed using simple percentages. 
The findings show that it is not a single factor that affects the learning of English vowel pronunciation; rather, many factors do so concurrently. Among the factors examined, it has been found that the lack of correlation between English orthography and its pronunciation, not the mother tongue (which most people consider a factor affecting the learning of second-language pronunciation), has the greatest influence on students’ learning and realization of English vowel sounds, since the respondents in this study are from different ethnic groups of Nigeria and thus speak different languages, yet have the same or almost the same problems when pronouncing the English vowel sounds. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=English%20vowels" title="English vowels">English vowels</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Nigeria" title=" Nigeria"> Nigeria</a>, <a href="https://publications.waset.org/abstracts/search?q=pronunciation" title=" pronunciation"> pronunciation</a> </p> <a href="https://publications.waset.org/abstracts/40169/problems-of-learning-english-vowels-pronunciation-in-nigeria" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40169.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">451</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">166</span> A Case Study Using Sounds Write and The Writing Revolution to Support Students with Literacy Difficulties</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emilie%20Zimet">Emilie Zimet</a> 
</p> <p class="card-text"><strong>Abstract:</strong></p> During our department meetings for teachers of children with learning disabilities and difficulties, we often discuss the best practices for supporting students who come to school with literacy difficulties. After completing the Sounds Write and The Writing Revolution courses, it seems possible to link the approaches while still maintaining fidelity to a program and providing individualised instruction to support students with such difficulties and disabilities. In this case study, the researcher has been focussing on how best to use the knowledge acquired to provide quality intervention that targets the varied areas in which students require support. Students present to school with a variety of co-occurring reading and writing deficits, and with complementary approaches such as The Writing Revolution and Sounds Write, it is possible to support students to improve their fundamental skills in these key areas. Over the next twelve weeks, the researcher will collect data on current students with whom this approach will be trialled and then compare growth with students from last year who received support using Sounds Write only. Maintaining fidelity may be a potential challenge, as each approach has been tested in a specific format for best results. The aim of this study is to determine if the approaches can be combined, so the implementation will need to incorporate elements of both reading (from Sounds Write) and writing (from The Writing Revolution). A further challenge is the length of each session (25 minutes), so the researcher will need to be creative in the use of time to ensure both writing and reading are targeted while the programs are still implemented faithfully. The implementation will be documented using student work samples and planning documents. 
This work will include a display of findings using student learning samples to demonstrate the importance of co-targeting the reading and writing challenges with which students come to school. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=literacy%20difficulties" title="literacy difficulties">literacy difficulties</a>, <a href="https://publications.waset.org/abstracts/search?q=intervention" title=" intervention"> intervention</a>, <a href="https://publications.waset.org/abstracts/search?q=individual%20differences" title=" individual differences"> individual differences</a>, <a href="https://publications.waset.org/abstracts/search?q=methods%20of%20provision" title=" methods of provision"> methods of provision</a> </p> <a href="https://publications.waset.org/abstracts/183364/a-case-study-using-sounds-write-and-the-writing-revolution-to-support-students-with-literacy-difficulties" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183364.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">54</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">165</span> Sound Instance: Art, Perception and Composition through Soundscapes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ricardo%20Mestre">Ricardo Mestre</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The soundscape stands out as an agglomeration of the sounds available in the world, associated with different contexts and origins; it is a theme studied by various areas of knowledge that seek to understand its benefits and consequences, contributing to the welfare of society and other ecosystems. 
Murray Schafer, the author who originally developed this concept, highlights the need for a greater recognition of sound reality, through the selection and differentiation of sounds, contributing to a tuning of the world and to the balance and well-being of humanity. According to some authors, the sound environment, produced and created in various ways, provides various sources of information that contribute to the orientation of the human being, alerting and guiding him during his daily journey, much like the small notifications received on a cell phone or other device with these features. In this way, it becomes possible to give sound its due importance in relation to the processes of individual representation, in matters of social, professional and emotional life. Ensuring an individual representation means providing the human being with new tools for the long process of reflection by recognizing his environment, the sounds that represent him, and his perspective on his respective function in it. In order to provide more information about the importance of the sound environment inherent to individual reality, the term sound instance is introduced to refer to the whole sound field existing in the individual's life, which is divided into four distinct subfields, each essential to the process of individual representation: sound matrix, sound cycles, sound traces and sound interference. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sound%20instance" title="sound instance">sound instance</a>, <a href="https://publications.waset.org/abstracts/search?q=soundscape" title=" soundscape"> soundscape</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20art" title=" sound art"> sound art</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=composition" title=" composition"> composition</a> </p> <a href="https://publications.waset.org/abstracts/155181/sound-instance-art-perception-and-composition-through-soundscapes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155181.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">146</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">164</span> Slice Bispectrogram Analysis-Based Classification of Environmental Sounds Using Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsumi%20Hirata">Katsumi Hirata</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Certain systems can function well only if they recognize the sound environment as humans do. In this research, we focus on sound classification by adopting a convolutional neural network and aim to develop a method that automatically classifies various environmental sounds. Although the neural network is a powerful technique, the performance depends on the type of input data. 
Therefore, we propose an approach via a slice bispectrogram, which is a third-order spectrogram: a slice of the amplitude of the short-time bispectrum. This paper explains the slice bispectrogram and discusses the effectiveness of the derived method by evaluating the experimental results using the ESC-50 sound dataset. As a result, the proposed scheme gives high accuracy and stability. Furthermore, a relationship between the accuracy and the non-Gaussianity of the sound signals was confirmed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=environmental%20sound" title="environmental sound">environmental sound</a>, <a href="https://publications.waset.org/abstracts/search?q=bispectrum" title=" bispectrum"> bispectrum</a>, <a href="https://publications.waset.org/abstracts/search?q=spectrogram" title=" spectrogram"> spectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=slice%20bispectrogram" title=" slice bispectrogram"> slice bispectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a> </p> <a href="https://publications.waset.org/abstracts/114107/slice-bispectrogram-analysis-based-classification-of-environmental-sounds-using-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">126</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">163</span> Investigating the Pronunciation of &#039;-S&#039; and &#039;-Ed&#039; Suffixes in Yemeni English</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Saif%20Bareq">Saif Bareq</a>, <a href="https://publications.waset.org/abstracts/search?q=Vivek%20Mirgane"> Vivek Mirgane</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present paper seeks to explicate the pronunciation of the ‘-s’ and ‘-ed’ suffixes when applied in their respective places in word endings. It attempts to investigate the problems faced by Yemenis in the pronunciation of these suffixes in all occurrences and realizations. It discusses the realization of ‘-s’ in its four areas of use (the plural, 3rd person singular and genitive markers, and the contraction of ‘has’ and ‘is’, as in ‘he’s’ and ‘it’s’) and shows how it is represented by three different sounds, /s/, /z/ and /ɪz/, based on the phonological structure of the words in which it occurs. Similarly, it explains the realization of the ‘-ed’ suffix of the past and past participle marker and how it is realized by three different sounds, /t/, /d/ and /ɪd/, governed by the phonological structure of these words. Besides, it tries to shed some light on the English morphophonemic and phonological rules that govern the pronunciation of such troublesome endings. It is hypothesized that the absence of such a phenomenon in the mother tongue underlies the mispronunciation of these suffixes. 
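The morphophonemic rules in question are regular enough to express as a short decision procedure keyed on the stem-final sound. The sketch below uses simplified IPA strings and illustrates the standard textbook rules, not the paper's own analysis.

```python
# Standard English allomorphy rules, keyed on the final sound of the stem
# (final sounds represented here with simplified IPA strings).
SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}   # trigger the /ɪz/ variant
VOICELESS = {"p", "t", "k", "f", "θ"}           # other voiceless obstruents

def s_suffix(final_sound: str) -> str:
    """Realization of '-s': /ɪz/ after sibilants, /s/ after other
    voiceless sounds, /z/ after voiced sounds and vowels."""
    if final_sound in SIBILANTS:
        return "ɪz"
    return "s" if final_sound in VOICELESS else "z"

def ed_suffix(final_sound: str) -> str:
    """Realization of '-ed': /ɪd/ after /t/ or /d/, /t/ after other
    voiceless sounds, /d/ after voiced sounds and vowels."""
    if final_sound in {"t", "d"}:
        return "ɪd"
    return "t" if final_sound in VOICELESS else "d"

print(s_suffix("k"))   # cats     -> s
print(s_suffix("g"))   # dogs     -> z
print(s_suffix("tʃ"))  # churches -> ɪz
print(ed_suffix("p"))  # stopped  -> t
print(ed_suffix("t"))  # wanted   -> ɪd
```

Because the choice is fully determined by the stem-final sound, learners whose first language lacks this alternation tend to overgeneralize a single variant, which is exactly the difficulty the paper investigates.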
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Suffixes%27%20Pronunciation" title="Suffixes&#039; Pronunciation">Suffixes&#039; Pronunciation</a>, <a href="https://publications.waset.org/abstracts/search?q=Phonological%20Structure" title=" Phonological Structure"> Phonological Structure</a>, <a href="https://publications.waset.org/abstracts/search?q=Phonological%20Rules" title=" Phonological Rules"> Phonological Rules</a>, <a href="https://publications.waset.org/abstracts/search?q=Morpho-Phonemics" title=" Morpho-Phonemics"> Morpho-Phonemics</a>, <a href="https://publications.waset.org/abstracts/search?q=Yemeni%20English" title=" Yemeni English"> Yemeni English</a> </p> <a href="https://publications.waset.org/abstracts/66130/investigating-the-pronunciation-of-s-and-ed-suffixes-in-yemeni-english" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66130.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">294</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">162</span> Musical Instrument Recognition in Polyphonic Audio Through Convolutional Neural Networks and Spectrograms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rujia%20Chen">Rujia Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Akbar%20Ghobakhlou"> Akbar Ghobakhlou</a>, <a href="https://publications.waset.org/abstracts/search?q=Ajit%20Narayanan"> Ajit Narayanan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates the task of identifying musical instruments in polyphonic compositions using Convolutional Neural Networks (CNNs) from spectrogram inputs, focusing 
on binary classification. The model showed promising results, with an accuracy of 97% on solo instrument recognition. When applied to polyphonic combinations of 1 to 10 instruments, the overall accuracy was 64%, reflecting the increasing challenge with larger ensembles. These findings contribute to the field of Music Information Retrieval (MIR) by highlighting the potential and limitations of current approaches in handling complex musical arrangements. Future work aims to include a broader range of musical sounds, including electronic and synthetic sounds, to improve the model's robustness and applicability in real-time MIR systems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=binary%20classifier" title="binary classifier">binary classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=spectrogram" title=" spectrogram"> spectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=instrument" title=" instrument"> instrument</a> </p> <a href="https://publications.waset.org/abstracts/185822/musical-instrument-recognition-in-polyphonic-audio-through-convolutional-neural-networks-and-spectrograms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185822.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">79</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">161</span> Sound Analysis of Young Broilers Reared under Different Stocking Densities in Intensive Poultry Farming</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaoyang%20Zhao">Xiaoyang Zhao</a>, 
<a href="https://publications.waset.org/abstracts/search?q=Kaiying%20Wang"> Kaiying Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The choice of stocking density in poultry farming is a potential way of determining the welfare level of poultry. However, it is difficult to measure stocking densities in poultry farming because of the many variables involved, such as species, age and weight, feeding method, house structure and geographical location across broiler houses. This paper proposes a method for measuring the differences among young broilers reared under different stocking densities by sound analysis. Vocalisations of broilers were recorded and analysed under different stocking densities to identify the relationship between sounds and stocking densities. Recordings were made continuously from three-week-old chickens in order to evaluate the variation of the sounds emitted by the animals. The experimental trial was carried out in an indoor reared broiler farm; the audio recording procedures lasted for 5 days. Broilers were divided into 5 groups; the stocking density treatments were 8/m², 10/m², 12/m² (96 birds/pen), 14/m² and 16/m², and all conditions, including ventilation and feed, were kept the same across groups except for stocking density. The recording and analysis of the chickens’ sounds were carried out noninvasively. Sound recordings were manually analysed and labelled using sound analysis software: GoldWave Digital Audio Editor. After the sound acquisition process, Mel Frequency Cepstrum Coefficients (MFCC) were extracted from the sound data, and a Support Vector Machine (SVM) was used as an early detector and classifier. This preliminary study, conducted in an indoor reared broiler farm, shows that this method can classify the sounds of chickens under different densities economically (only an inexpensive microphone and recorder are needed), with a classification accuracy of 85.7%. 
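As a rough sketch of the MFCC-plus-SVM pipeline described here: the feature vectors below are synthetic stand-ins for per-recording MFCC summaries (in the actual study the MFCCs come from the broiler recordings), and scikit-learn is assumed to be available.

```python
# Rough sketch of the MFCC + SVM classification step described above,
# using synthetic stand-in features rather than real chicken audio.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Five stocking-density groups, 40 "recordings" each, 13 MFCC-like features.
n_per_group, n_mfcc = 40, 13
X = np.vstack([rng.normal(loc=g, scale=0.5, size=(n_per_group, n_mfcc))
               for g in range(5)])
y = np.repeat(np.arange(5), n_per_group)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)   # the paper's classifier family
accuracy = clf.score(X_te, y_te)                  # high on well-separated groups
```

On real data the features would be MFCC frames (or their statistics) extracted from each labelled recording; the classifier interface stays the same.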
This method can predict the optimum stocking density of broilers with the complement of animal welfare indicators, animal productive indicators and so on. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=broiler" title="broiler">broiler</a>, <a href="https://publications.waset.org/abstracts/search?q=stocking%20density" title=" stocking density"> stocking density</a>, <a href="https://publications.waset.org/abstracts/search?q=poultry%20farming" title=" poultry farming"> poultry farming</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20monitoring" title=" sound monitoring"> sound monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=Mel%20Frequency%20Cepstrum%20Coefficients%20%28MFCC%29" title=" Mel Frequency Cepstrum Coefficients (MFCC)"> Mel Frequency Cepstrum Coefficients (MFCC)</a>, <a href="https://publications.waset.org/abstracts/search?q=Support%20Vector%20Machine%20%28SVM%29" title=" Support Vector Machine (SVM)"> Support Vector Machine (SVM)</a> </p> <a href="https://publications.waset.org/abstracts/91996/sound-analysis-of-young-broilers-reared-under-different-stocking-densities-in-intensive-poultry-farming" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91996.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">160</span> Challenges of Teaching and Learning English Speech Sounds in Five Selected Secondary Schools in Bauchi, Bauchi State, Nigeria</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mairo%20Musa%20Galadima">Mairo Musa Galadima</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Phoebe%20Mshelia"> Phoebe Mshelia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In Nigeria, the national policy of education stipulates that kindergarten and primary schools and the legislature are to use the three popular Nigerian languages, namely Hausa, Igbo and Yoruba. However, the English language seems to be preferred, and this motivates this paper. Attempts were made to draw out the challenges faced by learners in understanding English speech sounds and using them to communicate effectively in English, using five selected secondary schools in Bauchi. It was discovered that challenges abound in the wrong use of stress and intonation and in the transfer of phonetic features from the learners’ first language. Others are inadequately qualified teachers and a lack of relevant materials, including textbooks. It is recommended that teachers of English lay more emphasis on the teaching of supra-segmental features and be encouraged to go for further studies, seminars and refresher courses. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=kindergarten" title="kindergarten">kindergarten</a>, <a href="https://publications.waset.org/abstracts/search?q=stress" title=" stress"> stress</a>, <a href="https://publications.waset.org/abstracts/search?q=phonetic%20and%20intonation" title=" phonetic and intonation"> phonetic and intonation</a>, <a href="https://publications.waset.org/abstracts/search?q=Nigeria" title=" Nigeria"> Nigeria</a> </p> <a href="https://publications.waset.org/abstracts/15234/challenges-of-teaching-and-learning-english-speech-sounds-in-five-selected-secondary-schools-in-bauchi-bauchi-state-nigeria" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15234.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">300</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">159</span> Velocity Profiles of Vowel Perception by Javanese and Sundanese English Language Learners</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arum%20Perwitasari">Arum Perwitasari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Learning L2 sounds is influenced by the first language (L1) sound system. The current study seeks to examine how listeners with a different L1 vowel system perceive L2 sounds. The fact that English has a larger vowel inventory than the Javanese and Sundanese L1s might cause problems for Javanese and Sundanese English language learners in perceiving English sounds. 
To reveal L2 sound perception over time, we measured the mouse trajectories related to the hand movements made by Javanese and Sundanese English language learners (Javanese and Sundanese being two local languages of Indonesia). Do the Javanese and Sundanese listeners show higher velocity than English listeners when they perceive English vowels that are similar or new relative to their L1 system? The study aims to map the patterns of real-time processing through the corresponding hand movements to reveal any uncertainty in making selections. The results showed that the Javanese listeners exhibited significantly slower velocity values than the English listeners for the similar vowels /I, ɛ, ʊ/ in the 826–1200 ms post-stimulus window. Unlike the Javanese, the Sundanese listeners showed slow velocity values except for the similar vowel /ʊ/. For the perception of the new vowels /i:, æ, ɜ:, ʌ, ɑː, u:, ɔ:/, the Javanese listeners showed slower velocity in making the lexical decision. In contrast, the Sundanese listeners showed slow velocity only for the vowels /ɜ:, ɔ:, æ, I/, indicating that these vowels are hard to perceive. Our results fit well with second language models of how the L1 vowel system influences L2 sound perception. 
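The velocity values discussed above are derived from sampled mouse trajectories; a minimal sketch of how such a profile can be computed from position samples (the sampling rate and trajectory below are illustrative, not the study's data):

```python
# Minimal sketch of computing a velocity profile from mouse-tracking data:
# sample (x, y) cursor positions at a fixed rate and take finite differences.
import numpy as np

def velocity_profile(xy: np.ndarray, dt: float) -> np.ndarray:
    """Speed between consecutive (x, y) samples, in distance units per second."""
    deltas = np.diff(xy, axis=0)                 # displacement between samples
    return np.linalg.norm(deltas, axis=1) / dt   # distance / time

# A straight drag from (0, 0) to (100, 0) sampled 11 times at 100 Hz:
xy = np.column_stack([np.linspace(0.0, 100.0, 11), np.zeros(11)])
speeds = velocity_profile(xy, dt=0.01)           # constant speed along the path
```

Comparing such profiles between listener groups within a fixed post-stimulus window is what yields the "slower velocity" effects reported above.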
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=velocity%20profiles" title="velocity profiles">velocity profiles</a>, <a href="https://publications.waset.org/abstracts/search?q=EFL%20learners" title=" EFL learners"> EFL learners</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20perception" title=" speech perception"> speech perception</a>, <a href="https://publications.waset.org/abstracts/search?q=experimental%20linguistics" title=" experimental linguistics"> experimental linguistics</a> </p> <a href="https://publications.waset.org/abstracts/61163/velocity-profiles-of-vowel-perception-by-javanese-and-sundanese-english-language-learners" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61163.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">217</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">158</span> Challenges of Teaching and Learning English Speech Sounds in Five Selected Secondary Schools in Bauchi, Bauchi State, Nigeria</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mairo%20Musa%20Galadima">Mairo Musa Galadima</a>, <a href="https://publications.waset.org/abstracts/search?q=Phoebe%20Mshelia"> Phoebe Mshelia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In Nigeria, the national policy of education stipulates that the kindergarten-primary schools and the legislature are to use the three popular Nigerian Languages namely: Hausa, Igbo, and Yoruba. However, the English language seems to be preferred and this calls for this paper. 
Attempts were made to draw out the challenges faced by learners in understanding English speech sounds and using them to communicate effectively in English, using five selected secondary schools in Bauchi. It was discovered that challenges abound in the wrong use of stress and intonation and in the transfer of phonetic features from the learners’ first language. Others are inadequately qualified teachers and a lack of relevant materials, including textbooks. It is recommended that teachers of English should lay more emphasis on the teaching of supra-segmental features and should be encouraged to go for further studies, seminars and refresher courses. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=stress%20and%20intonation" title="stress and intonation">stress and intonation</a>, <a href="https://publications.waset.org/abstracts/search?q=phonetic%20and%20challenges" title=" phonetic and challenges"> phonetic and challenges</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching%20and%20learning%20English" title=" teaching and learning English"> teaching and learning English</a>, <a href="https://publications.waset.org/abstracts/search?q=secondary%20schools" title=" secondary schools"> secondary schools</a> </p> <a href="https://publications.waset.org/abstracts/12519/challenges-of-teaching-and-learning-english-speech-sounds-in-five-selected-secondary-schools-in-bauchi-bauchi-state-nigeria" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12519.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">157</span> Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling 
Numerical Representations Leads to Improved Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=George%20Zhou">George Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Yunchan%20Chen"> Yunchan Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Candace%20Chien"> Candace Chien</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Kidney replacement therapy is the current standard of care for end-stage renal disease. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope. For each patient, blood flow sounds were collected at 6 different locations along the patient’s AVF. The 6 locations are artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds were labeled as “patent” (normal) or “stenotic” (abnormal). The labels were validated against concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF. For example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes. 
The numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sound and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline performance of our ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively. Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, we see that using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, or 100) results in progressively improved performance. In theory, the integer values do not matter since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with location encodings and converge on the same solution. However, in the setting of limited data and computation resources, increasing the importance at initialization either leads to faster convergence or helps the model escape a local minimum. 
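The encoding-and-concatenation step described in this abstract is simple to state in code; the sketch below follows the abstract's location codes and its best-performing scale of 100, while the variable and function names are illustrative, not from the paper.

```python
# Sketch of the ordinal location-encoding scheme described above: the integer
# code for the recording site, multiplied by a scale factor, is concatenated
# to the flattened feature vector before the classification head.
import numpy as np

LOCATION_CODES = {"artery": 0, "arch": 1, "proximal": 2,
                  "middle": 3, "distal": 4, "anastomosis": 5}

def append_location(features: np.ndarray, location: str,
                    scale: float = 100.0) -> np.ndarray:
    """Concatenate the scaled ordinal location code to a flattened feature vector."""
    code = LOCATION_CODES[location] * scale
    return np.concatenate([features.ravel(), [code]])

flat = np.zeros(768)                        # e.g. a flattened spectrogram embedding
x = append_location(flat, "anastomosis")    # appends 5 * 100 = 500 as the last entry
```

Changing `scale` from 1 to 10 to 100 reproduces the three encoding variants compared in the abstract without touching the rest of the model.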
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=arteriovenous%20fistula" title="arteriovenous fistula">arteriovenous fistula</a>, <a href="https://publications.waset.org/abstracts/search?q=blood%20flow%20sounds" title=" blood flow sounds"> blood flow sounds</a>, <a href="https://publications.waset.org/abstracts/search?q=metadata%20encoding" title=" metadata encoding"> metadata encoding</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/163552/categorical-metadata-encoding-schemes-for-arteriovenous-fistula-blood-flow-sound-classification-scaling-numerical-representations-leads-to-improved-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163552.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">156</span> A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jing%20Wu">Jing Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei%20Lv"> Wei Lv</a>, <a href="https://publications.waset.org/abstracts/search?q=Yibing%20Li"> Yibing Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuanfan%20You"> Yuanfan You</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The separation of speech signals has become a research hotspot in the field of signal processing in recent years. 
It has many applications in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy. The issue of identifying the sounds of interest and obtaining clear sounds in such an environment becomes a problem worth exploring, that is, the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the problem of under-determined blind source separation. The method is mainly divided into two parts. Firstly, a clustering algorithm is used to estimate the mixing matrix from the observed signals. Then the signals are separated based on the estimated mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm is proposed to estimate the mixing matrix for speech signals in the UBSS model. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratios (SNR). In response to this problem, this paper considers an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach in this paper not only improves the accuracy of estimation, but also applies to any mixing matrix. 
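The clustering step of sparse component analysis can be illustrated with a deliberately idealized sketch: if at most one source is active per sample, each observation vector points along one column of the mixing matrix, so clustering the observation directions recovers the matrix. A naive angle-binning step stands in for the paper's improved potential-function clustering, and all data below are synthetic.

```python
# Idealized sketch of mixing-matrix estimation by clustering observation
# directions, for a 4-source / 2-channel UBSS setting like the paper's example.
import numpy as np

rng = np.random.default_rng(1)

# True 2x4 mixing matrix with unit-norm columns at known angles.
angles = np.array([0.2, 0.8, 1.4, 2.0])
A = np.vstack([np.cos(angles), np.sin(angles)])     # shape (2, 4)

# Idealized sparse sources: exactly one source active in each sample.
n = 400
active = rng.integers(0, 4, size=n)
S = np.zeros((4, n))
S[active, np.arange(n)] = rng.normal(size=n)
X = A @ S                                           # two observed mixtures

# Normalize observations, fold the sign ambiguity, and bin the angles:
# each distinct angle corresponds to one estimated mixing-matrix column.
keep = np.linalg.norm(X, axis=0) > 1e-6
U = X[:, keep] / np.linalg.norm(X[:, keep], axis=0)
U = U * np.sign(U[1])                               # make columns point "upward"
est_angles = np.unique(np.round(np.arctan2(U[1], U[0]), 6))
```

Real speech is only approximately sparse (e.g. in a time-frequency representation), which is why the samples scatter around the column directions and a robust clustering method such as the paper's improved potential function is needed instead of exact binning.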
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DBSCAN" title="DBSCAN">DBSCAN</a>, <a href="https://publications.waset.org/abstracts/search?q=potential%20function" title=" potential function"> potential function</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20signal" title=" speech signal"> speech signal</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20UBSS%20model" title=" the UBSS model"> the UBSS model</a> </p> <a href="https://publications.waset.org/abstracts/101455/a-mixing-matrix-estimation-algorithm-for-speech-signals-under-the-under-determined-blind-source-separation-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101455.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">135</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">155</span> Still Pictures for Learning Foreign Language Sounds</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kaoru%20Tomita">Kaoru Tomita</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study explores how visual information helps us to learn foreign language pronunciation. Visual assistance and its effect for learning foreign language have been discussed widely. For example, simplified illustrations in textbooks are used for telling learners which part of the articulation organs are used for pronouncing sounds. Vowels are put into a chart that depicts a vowel space. Consonants are put into a table that contains two axes of place and manner of articulation. 
When comparing a still picture and a moving picture for visualizing learners’ pronunciation, it becomes clear that the former works better than the latter. The visualization of vowels was applied to class activities in which native and non-native speakers’ English was compared and the learners’ feedback was collected: the positions of the six vowels did not scatter as much as expected. Specifically, two vowels were not discriminated and sat very close together in the vowel space. It was surprising for the author to find that learners liked analyzing their own pronunciation by plotting the first and second formants (F1 and F2) on a sheet of paper with a pencil. Even a simple method works well if it leads learners to think about their pronunciation analytically. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feedback" title="feedback">feedback</a>, <a href="https://publications.waset.org/abstracts/search?q=pronunciation" title=" pronunciation"> pronunciation</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=vowel" title=" vowel"> vowel</a> </p> <a href="https://publications.waset.org/abstracts/50291/still-pictures-for-learning-foreign-language-sounds" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50291.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">251</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">154</span> Development of the New York Misophonia Scale: Implications for Diagnostic Criteria</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Usha%20Barahmand">Usha Barahmand</a>, <a href="https://publications.waset.org/abstracts/search?q=Maria%20Stalias"> Maria Stalias</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdul%20Haq"> Abdul Haq</a>, <a href="https://publications.waset.org/abstracts/search?q=Esther%20Rotlevi"> Esther Rotlevi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ying%20Xiang"> Ying Xiang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Misophonia is a condition in which specific repetitive oral, nasal, or other sounds and movements made by humans trigger impulsive aversive reactions of irritation or disgust that instantly become anger. A few measures exist for the assessment of misophonia, but each has some limitations, and evidence for a formal diagnosis is still lacking. The objective of this study was to develop a reliable and valid measure of misophonia for use in the general population. Adopting a purely descriptive approach, this study focused on developing a self-report measure using all triggers and reactions identified in previous studies on misophonia. A measure with two subscales, one assessing the aversive quality of various triggers and the other assessing reactions of individuals, was developed. Data were gathered from a large sample of both men and women ranging in age from 18 to 65 years. Exploratory factor analysis revealed three main triggers: oral/nasal sounds, hand and leg movements, and environmental sounds. Two clusters of reactions also emerged: nonangry attempts to avoid the impact of the aversive stimuli and angry attempts to stop the aversive stimuli. The examination of the psychometric properties of the scale revealed its internal consistency and test-retest reliability to be excellent. The scale was also found to have very good concurrent and convergent validity. 
Significant annoyance and disgust in response to the triggers were reported by 12% of the sample, although for some specific triggers, rates as high as 31% were also reported. These findings have implications for the delineation of the criteria for identifying misophonia as a clinical condition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adults" title="adults">adults</a>, <a href="https://publications.waset.org/abstracts/search?q=factor%20analysis" title=" factor analysis"> factor analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=misophonia" title=" misophonia"> misophonia</a>, <a href="https://publications.waset.org/abstracts/search?q=psychometric%20properties" title=" psychometric properties"> psychometric properties</a>, <a href="https://publications.waset.org/abstracts/search?q=scale" title=" scale"> scale</a> </p> <a href="https://publications.waset.org/abstracts/131254/development-of-the-new-york-misophonia-scale-implications-for-diagnostic-criteria" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/131254.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">207</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">153</span> Role of Speech Articulation in English Language Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khadija%20Rafi">Khadija Rafi</a>, <a href="https://publications.waset.org/abstracts/search?q=Neha%20Jamil"> Neha Jamil</a>, <a href="https://publications.waset.org/abstracts/search?q=Laiba%20Khalid"> Laiba Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Meerub%20Nawaz"> Meerub Nawaz</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Mahwish%20Farooq"> Mahwish Farooq</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech articulation is the complex process of producing intelligible sounds through precise movements of various structures within the vocal tract. These structures are known as articulators and comprise the lips, teeth, tongue, and palate. The articulators work together to produce a range of distinct phonemes, which form the basis of spoken language. The process starts with the airstream from the lungs passing through the trachea and into the oral and nasal cavities. When the air passes through the mouth, the tongue and the muscles around it coordinate to create particular sounds. This can be seen when the tongue is placed in different positions, sometimes near the alveolar ridge, the soft palate, the roof of the mouth, or the back of the teeth, producing the unique quality of each phoneme. We articulate vowels with an open vocal tract, the height and position of the tongue differing for each vowel, while consonants are pronounced by creating obstructions in the airflow. For instance, the consonant ‘b’ is a plosive and can be produced only by briefly closing the lips. Articulation disorders not only affect communication but can also be a hurdle in speech production. To improve articulation skills in such individuals, doctors often recommend speech therapy, which involves various kinds of exercises, such as jaw exercises and tongue twisters. These disorders are more common in children going through developmental articulation issues, while in adults they can be caused by injury, neurological conditions, or other speech-related disorders. 
In short, speech articulation is an essential aspect of effective communication, relying on the coordination of specific articulators to produce the distinct, intelligible sounds that are a vital part of spoken language. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=linguistics" title="linguistics">linguistics</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20articulation" title=" speech articulation"> speech articulation</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20therapy" title=" speech therapy"> speech therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20learning" title=" language learning"> language learning</a> </p> <a href="https://publications.waset.org/abstracts/176220/role-of-speech-articulation-in-english-language-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176220.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">62</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=percussive%20sounds&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=percussive%20sounds&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=percussive%20sounds&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=percussive%20sounds&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=percussive%20sounds&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=percussive%20sounds&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=percussive%20sounds&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> 
<li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> 
jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>