<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <title>Search results for: heart sound classification</title> <meta name="description" content="Search results for: heart sound classification"> <meta name="keywords" content="heart sound classification"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="heart sound classification" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="heart sound classification"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4022</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: heart sound classification</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4022</span> Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jia%20Xin%20Low">Jia Xin Low</a>, <a href="https://publications.waset.org/abstracts/search?q=Keng%20Wah%20Choo"> Keng Wah Choo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an automatic normal and abnormal heart sound classification model based on a deep learning algorithm.
Heart sound recordings from the MITHSDB dataset of the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heartbeat, and each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to hand-craft classification features for a supervised machine learning algorithm; instead, the features are determined automatically through training on the provided time series. The results show that the prediction model provides reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g., remote monitoring of PCG signals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title=" discrete wavelet transform"> discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification" title=" heart sound classification"> heart sound classification</a> </p> <a href="https://publications.waset.org/abstracts/85039/automatic-classification-of-periodic-heart-sounds-using-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85039.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span
class="badge badge-light">348</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4021</span> Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nadia%20Masood%20Khan">Nadia Masood Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Salman%20Khan"> Muhammad Salman Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Gul%20Muhammad%20Khan"> Gul Muhammad Khan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need to develop computer-aided detection/diagnosis (CAD) systems that assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. Support vector machines (SVM), artificial neural networks (ANN), and Cartesian genetic programming evolved artificial neural networks (CGPANN) are explored in this study without applying any segmentation algorithm. The signals are first pre-processed to remove unwanted frequencies. Both time and frequency domain features are then extracted for training the different models. The algorithms are tested in multiple scenarios, and their strengths and weaknesses are discussed. Results indicate that SVM outperforms the rest with an accuracy of 73.64%.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title="pattern recognition">pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20aided%20diagnosis" title="computer aided diagnosis">computer aided diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification" title=" heart sound classification"> heart sound classification</a>, <a href="https://publications.waset.org/abstracts/search?q=and%20feature%20extraction" title=" and feature extraction"> and feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/95434/automated-heart-sound-classification-from-unsegmented-phonocardiogram-signals-using-time-frequency-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95434.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">263</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4020</span> Classification of Traffic Complex Acoustic Space</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bin%20Wang">Bin Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20Kang"> Jian Kang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> After years of development, the study of soundscape has been refined to the level of urban space and building types. A traffic complex takes the traffic function as its core, with distinctive design features in the combination of architectural spaces and traffic streamlines.
Its acoustic environment is strongly characterized by function, space, material, users, and other factors. A traffic complex integrates various functions such as business, accommodation, and entertainment. It takes various forms and offers complex and varied experiences, and its acoustic environment is made rich and interesting by the distribution and coordination of the various functions, the division and unification of the building mass, the separation and organization of different spaces, and the crossing and integration of multiple traffic flows. In this study, field recordings were made in each space of several traffic complexes, and different acoustic elements were extracted and analyzed, including changes in sound pressure, frequency distribution, steady sound sources, and sound source information, in order to perform cluster analysis of each traffic complex building. From an acoustic environment perspective, the complicated spaces of traffic complex buildings were divided into several typical sound spaces, mainly stable sound space, high-pressure sound space, rhythm sound space, and upheaval sound space. This classification can further deepen the study of subjective evaluation and control of the acoustic environment of traffic complexes.
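The cluster-analysis step described in this abstract can be sketched as follows, with entirely synthetic acoustic feature vectors standing in for the field recordings; the feature choices (level, level variability, spectral centroid) and group parameters are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Each row describes one measured space:
# [mean sound pressure level (dB), level variability (dB), spectral centroid (Hz)]
stable_spaces   = rng.normal([55, 2, 500], [2, 0.5, 50], size=(10, 3))
high_pressure   = rng.normal([75, 3, 800], [2, 0.5, 50], size=(10, 3))
rhythmic_spaces = rng.normal([65, 8, 600], [2, 0.5, 50], size=(10, 3))
features = np.vstack([stable_spaces, high_pressure, rhythmic_spaces])

# Group the spaces into typical sound-space types by k-means clustering.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print(len(set(km.labels_)))  # number of distinct sound-space clusters
```

A real study would normalize the features to comparable scales first (the dB and Hz axes differ by an order of magnitude) and choose the number of clusters from the data, e.g. with a silhouette score.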
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=soundscape" title="soundscape">soundscape</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20complex" title=" traffic complex"> traffic complex</a>, <a href="https://publications.waset.org/abstracts/search?q=cluster%20analysis" title=" cluster analysis"> cluster analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/57017/classification-of-traffic-complex-acoustic-space" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57017.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">253</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4019</span> Slice Bispectrogram Analysis-Based Classification of Environmental Sounds Using Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsumi%20Hirata">Katsumi Hirata</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Certain systems can function well only if they recognize the sound environment as humans do. In this research, we focus on sound classification by adopting a convolutional neural network and aim to develop a method that automatically classifies various environmental sounds. Although the neural network is a powerful technique, the performance depends on the type of input data. Therefore, we propose an approach via a slice bispectrogram, which is a third-order spectrogram and is a slice version of the amplitude for the short-time bispectrum. 
This paper explains the slice bispectrogram and discusses the effectiveness of the derived method by evaluating experimental results on the ESC-50 sound dataset. The proposed scheme gives high accuracy and stability. Furthermore, a relationship between the accuracy and the non-Gaussianity of the sound signals was confirmed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=environmental%20sound" title="environmental sound">environmental sound</a>, <a href="https://publications.waset.org/abstracts/search?q=bispectrum" title=" bispectrum"> bispectrum</a>, <a href="https://publications.waset.org/abstracts/search?q=spectrogram" title=" spectrogram"> spectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=slice%20bispectrogram" title=" slice bispectrogram"> slice bispectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a> </p> <a href="https://publications.waset.org/abstracts/114107/slice-bispectrogram-analysis-based-classification-of-environmental-sounds-using-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">126</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4018</span> Wavelet-Based Classification of Myocardial Ischemia, Arrhythmia, Congestive Heart Failure and Sleep Apnea</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Santanu%20Chattopadhyay">Santanu Chattopadhyay</a>, <a
href="https://publications.waset.org/abstracts/search?q=Gautam%20Sarkar"> Gautam Sarkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Arabinda%20Das"> Arabinda Das</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents wavelet-based classification of various heart diseases. Electrocardiogram signals of different heart patients have been studied, and their statistical nature has been compared with that of electrocardiograms of normal persons. Four heart diseases are considered in this study: myocardial ischemia (MI), congestive heart failure (CHF), arrhythmia, and sleep apnea. The statistical nature of the electrocardiograms in each case is characterized by the kurtosis values of two types of wavelet coefficients: approximation and detail. Nine wavelet decomposition levels are considered in each case, and the kurtosis of both the approximation and the detail coefficients is computed from decomposition level one to level nine. Based on significant differences, a few decomposition levels are chosen and then used for classification.
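The per-level kurtosis features described in this abstract can be sketched as follows. To keep the example self-contained, it uses a hand-rolled Haar decomposition rather than the (unspecified) wavelet family of the paper, and a synthetic spiky signal stands in for a real ECG:

```python
import numpy as np
from scipy.stats import kurtosis

def haar_kurtosis_levels(x, levels=9):
    """Multi-level Haar DWT; returns (approximation, detail) kurtosis pairs
    per decomposition level, level 1 being the finest."""
    a = np.asarray(x, dtype=float)
    stats = []
    for _ in range(levels):
        if len(a) < 2:
            break
        a = a[: len(a) // 2 * 2]                  # drop an odd trailing sample
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)  # low-pass (approximation)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)  # high-pass (detail)
        stats.append((kurtosis(approx), kurtosis(detail)))
        a = approx                                 # recurse on the approximation
    return stats

# Synthetic ECG-like signal: sharp R-peak-like spikes on a noisy baseline.
rng = np.random.default_rng(2)
sig = 0.1 * rng.standard_normal(4096)
sig[::256] += 5.0  # spikes make the fine-scale detail coefficients heavy-tailed
stats = haar_kurtosis_levels(sig, levels=9)
print(len(stats))  # one kurtosis pair per decomposition level
```

The sharp spikes produce strongly non-Gaussian (high-kurtosis) detail coefficients at the fine scales, which is the kind of level-wise difference the classification step relies on.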
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=arrhythmia" title="arrhythmia">arrhythmia</a>, <a href="https://publications.waset.org/abstracts/search?q=congestive%20heart%20failure" title=" congestive heart failure"> congestive heart failure</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title=" discrete wavelet transform"> discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=electrocardiogram" title=" electrocardiogram"> electrocardiogram</a>, <a href="https://publications.waset.org/abstracts/search?q=myocardial%20ischemia" title=" myocardial ischemia"> myocardial ischemia</a>, <a href="https://publications.waset.org/abstracts/search?q=sleep%20apnea" title=" sleep apnea"> sleep apnea</a> </p> <a href="https://publications.waset.org/abstracts/112333/wavelet-based-classification-of-myocardial-ischemia-arrhythmia-congestive-heart-failure-and-sleep-apnea" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112333.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4017</span> Android – Based Wireless Electronic Stethoscope</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aw%20Adi%20Arryansyah">Aw Adi Arryansyah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Using an electronic stethoscope to detect heartbeat and breath sounds is an effective way to investigate cardiovascular diseases. At the same time, technology is moving toward mobile. Almost everyone has a smartphone, and smartphones offer many platforms.
Creating mobile applications has also become easier; HTML5 technology can be used to create mobile apps, and Android is the most widely used platform. This is the reason we built a wireless electronic stethoscope based on Android. The Android-based wireless electronic stethoscope has a simple design: a sound sensor mounted on a membrane is connected to a Bluetooth module, which sends the heart auscultation audio input over Bluetooth to an Android device. On the software side, the Android application reads the audio input, translates it into a clear visualization, and plays back the audio at an adjustable volume. The heartbeat sound can also be converted into BPM data for heartbeat analysis, such as normal rhythm, bradycardia, or tachycardia. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wireless" title="wireless">wireless</a>, <a href="https://publications.waset.org/abstracts/search?q=HTML%205" title=" HTML 5"> HTML 5</a>, <a href="https://publications.waset.org/abstracts/search?q=auscultation" title=" auscultation"> auscultation</a>, <a href="https://publications.waset.org/abstracts/search?q=bradycardia" title=" bradycardia"> bradycardia</a>, <a href="https://publications.waset.org/abstracts/search?q=tachycardia" title=" tachycardia"> tachycardia</a> </p> <a href="https://publications.waset.org/abstracts/36762/android-based-wireless-electronic-stethoscope" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36762.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">347</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4016</span> Spatial Audio Player Using Musical Genre Classification</h5> <div
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jun-Yong%20Lee">Jun-Yong Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Gook%20Kim"> Hyoung-Gook Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a smart music player that combines musical genre classification with spatial audio processing. The musical genre is classified based on content analysis of the musical segment detected in the audio stream. In parallel with the classification, spatial audio quality is achieved by adding artificial reverberation in a virtual acoustic space to the input mono sound. The spatial sound is then boosted on playback with frequency gains chosen according to the musical genre. Experiments measured the accuracy of detecting the musical segment in the audio stream and of its genre classification. A listening test was performed on the spatial audio processing based on the virtual acoustic space.
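The artificial-reverberation step mentioned in this abstract can be sketched as follows. This is a generic illustration, not the authors' renderer: it models the virtual acoustic space with an exponentially decaying noise impulse response, and the sample rate, decay time, and wet/dry mix are assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def add_reverb(mono, fs=8000, rt60=0.5, wet=0.3):
    """Add simple artificial reverberation to a mono signal by convolving it
    with an exponentially decaying noise impulse response (IR)."""
    n = int(rt60 * fs)
    t = np.arange(n) / fs
    # ln(1000) ≈ 6.91 makes the envelope fall by 60 dB over rt60 seconds.
    ir = rng.standard_normal(n) * np.exp(-6.91 * t / rt60)
    ir /= np.sqrt(np.sum(ir ** 2))                 # unit-energy IR
    wet_sig = np.convolve(mono, ir)[: len(mono)]   # reverberant component
    return (1 - wet) * mono + wet * wet_sig        # wet/dry mix

tone = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)  # 1 s test tone
out = add_reverb(tone)
print(out.shape)  # same length as the input
```

A genre-dependent equalizer would then apply per-band gains to `out` before playback.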
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20equalization" title="automatic equalization">automatic equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=genre%20classification" title=" genre classification"> genre classification</a>, <a href="https://publications.waset.org/abstracts/search?q=music%20segment%20detection" title=" music segment detection"> music segment detection</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20audio%20processing" title=" spatial audio processing"> spatial audio processing</a> </p> <a href="https://publications.waset.org/abstracts/7561/spatial-audio-player-using-musical-genre-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7561.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">429</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4015</span> Robust Heart Sounds Segmentation Based on the Variation of the Phonocardiogram Curve Length</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mecheri%20Zeid%20Belmecheri">Mecheri Zeid Belmecheri</a>, <a href="https://publications.waset.org/abstracts/search?q=Maamar%20Ahfir"> Maamar Ahfir</a>, <a href="https://publications.waset.org/abstracts/search?q=Izzet%20Kale"> Izzet Kale</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic cardiac auscultation is still a subject of research in order to establish an objective diagnosis. Recorded heart sounds as Phonocardiogram signals (PCG) can be used for automatic segmentation into components that have clinical meanings. 
These are the first sound, S1, the second sound, S2, and the systolic and diastolic components, respectively. In this paper, an automatic method is proposed for the robust segmentation of heart sounds. The method is based on calculating an intermediate sawtooth-shaped signal from the length variation of the recorded Phonocardiogram (PCG) signal in the time domain, and on using its positive derivative, which is a binary signal, to train a Recurrent Neural Network (RNN). Results obtained on a large database of recorded PCGs with their simultaneously recorded ElectroCardioGrams (ECGs) from different patients in clinical settings, including normal and abnormal subjects, show an average segmentation test performance of 76% sensitivity and 94% specificity. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heart%20sounds" title="heart sounds">heart sounds</a>, <a href="https://publications.waset.org/abstracts/search?q=PCG%20segmentation" title=" PCG segmentation"> PCG segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=event%20detection" title=" event detection"> event detection</a>, <a href="https://publications.waset.org/abstracts/search?q=recurrent%20neural%20networks" title=" recurrent neural networks"> recurrent neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=PCG%20curve%20length" title=" PCG curve length"> PCG curve length</a> </p> <a href="https://publications.waset.org/abstracts/157289/robust-heart-sounds-segmentation-based-on-the-variation-of-the-phonocardiogram-curve-length" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157289.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">178</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5
class="card-header" style="font-size:.9rem"><span class="badge badge-info">4014</span> Heart Failure Identification and Progression by Classifying Cardiac Patients</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Saqlain">Muhammad Saqlain</a>, <a href="https://publications.waset.org/abstracts/search?q=Nazar%20Abbas%20Saqib"> Nazar Abbas Saqib</a>, <a href="https://publications.waset.org/abstracts/search?q=Muazzam%20A.%20Khan"> Muazzam A. Khan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Heart failure (HF) has become a major health problem in our society. The prevalence of HF increases with patient age, and HF is a major cause of the high mortality rate in adults. Successful identification of HF and of its progression can help reduce the individual and social burden of this syndrome. In this study, we use a real data set of cardiac patients to propose a classification model for the identification and progression of HF. The data set was divided into three age groups, namely young, adult, and old, and each age group was further divided into four classes according to the patient's current physical condition. Contemporary data mining classification algorithms were applied to each individual class of every age group to identify HF. The decision tree (DT) gives the highest accuracy, 90%, and outperforms all the other algorithms. Our model accurately diagnoses different stages of HF for each age group, and it can be very useful for the early prediction of HF.
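A minimal sketch of the decision-tree classification step described in this abstract, using synthetic patient records; the two features (ejection fraction, resting heart rate) and their distributions are illustrative assumptions, not the study's actual data set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Synthetic patients within one age group:
# [ejection fraction (%), resting heart rate (bpm)].
# Here, low ejection fraction and high heart rate loosely indicate HF.
healthy = np.column_stack([rng.normal(60, 5, 100), rng.normal(70, 8, 100)])
hf      = np.column_stack([rng.normal(35, 5, 100), rng.normal(95, 8, 100)])
X = np.vstack([healthy, hf])
y = np.array([0] * 100 + [1] * 100)  # 0 = no HF, 1 = HF

# Train on one part of the data and evaluate on a held-out test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(tree.score(X_te, y_te))
```

In the study's setting this would be repeated per age group, with the four physical-condition classes as the prediction target rather than a binary label.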
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=decision%20tree" title="decision tree">decision tree</a>, <a href="https://publications.waset.org/abstracts/search?q=heart%20failure" title=" heart failure"> heart failure</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title=" data mining"> data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=classification%20model" title=" classification model"> classification model</a> </p> <a href="https://publications.waset.org/abstracts/62215/heart-failure-identification-and-progression-by-classifying-cardiac-patients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62215.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">402</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4013</span> Acoustic Performance and Application of Three Personalized Sound-Absorbing Materials</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fangying%20Wang">Fangying Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhang%20Sanming"> Zhang Sanming</a>, <a href="https://publications.waset.org/abstracts/search?q=Ni%20Qian"> Ni Qian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, more and more personalized sound absorbing materials have entered the Chinese room acoustical decoration market. The acoustic performance of three kinds of personalized sound-absorbing materials: Flame-retardant Flax Fiber Sound-absorbing Cotton, Eco-Friendly Sand Acoustic Panel and Transparent Micro-perforated Panel (Film) are tested by Reverberation Room Method. 
The sound absorption characteristic curves show that their performance matches or even exceeds that of traditional sound-absorbing materials. Through application in actual projects, these personalized sound-absorbing materials have also proved their sound absorption ability and unique decorative effect. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acoustic%20performance" title="acoustic performance">acoustic performance</a>, <a href="https://publications.waset.org/abstracts/search?q=application%20prospect%20personalized%20sound-absorbing%20materials" title=" application prospect personalized sound-absorbing materials"> application prospect personalized sound-absorbing materials</a> </p> <a href="https://publications.waset.org/abstracts/88980/acoustic-performance-and-application-of-three-personalized-sound-absorbing-materials" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88980.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">190</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4012</span> Automated Recognition of Still’s Murmur in Children</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sukryool%20Kang">Sukryool Kang</a>, <a href="https://publications.waset.org/abstracts/search?q=James%20McConnaughey"> James McConnaughey</a>, <a href="https://publications.waset.org/abstracts/search?q=Robin%20Doroshow"> Robin Doroshow</a>, <a href="https://publications.waset.org/abstracts/search?q=Raj%20Shekhar"> Raj Shekhar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Still’s murmur, a vibratory heart murmur, is the most common normal innocent murmur of
childhood. Many children with this murmur are unnecessarily referred for cardiology consultation and testing, which exacts a high cost, financially and emotionally, on the patients and their parents. Pediatricians to date have not been successful at distinguishing Still’s murmur from the murmurs of true heart disease. In this paper, we present a new algorithmic approach to distinguish Still’s murmur from pathological murmurs in children. We propose two distinct features, spectral width and signal power, which describe the sharpness of the spectrum and the signal intensity of the murmur, respectively. Seventy pediatric heart sound recordings of 41 Still’s and 29 pathological murmurs were used to develop and evaluate our algorithm, which achieved a true positive rate of 97% and a false positive rate of 0%. This approach would meet clinical standards in recognizing Still’s murmur. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AR%20modeling" title="AR modeling">AR modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=auscultation" title=" auscultation"> auscultation</a>, <a href="https://publications.waset.org/abstracts/search?q=heart%20murmurs" title=" heart murmurs"> heart murmurs</a>, <a href="https://publications.waset.org/abstracts/search?q=Still%27s%20murmur" title=" Still's murmur"> Still's murmur</a> </p> <a href="https://publications.waset.org/abstracts/26956/automated-recognition-of-stills-murmur-in-children" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26956.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4011</span> Comparing the Effect of Virtual Reality and Sound on Landscape
Perception</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mark%20Lindquist">Mark Lindquist</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents preliminary results of exploratory empirical research investigating the effect of viewing 3D landscape visualizations in virtual reality compared to on a computer monitor, and how sound impacts perception. Five landscape types were paired with three sound conditions (no sound, generic sound, realistic sound). Perceived realism, preference, recreational value, and biodiversity were evaluated in a controlled laboratory environment. Results indicate that sound has a larger perceptual impact than display mode, regardless of sound source, across all perceptual measures. The results are discussed in terms of how sound can impact landscape preference and spatiotemporal understanding. The paper concludes with a discussion of the implications for designers, planners, and the public, and identifies future research directions in this area.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=landscape%20experience" title="landscape experience">landscape experience</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=soundscape" title=" soundscape"> soundscape</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title=" virtual reality"> virtual reality</a> </p> <a href="https://publications.waset.org/abstracts/114889/comparing-the-effect-of-virtual-reality-and-sound-on-landscape-perception" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114889.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">169</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4010</span> Altered States of Consciousness in Narrative Cinema: Subjective Film Sound</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mladen%20Milicevic">Mladen Milicevic </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, subjective film sound will be addressed as it gets represented in narrative cinema. First, 'meta-diegetic' sound will be briefly explained, followed by a transition to 'oneiric' sound. The representation of oneiric sound refers to a situation where film characters are experiencing some sort of altered state of consciousness. Looking at an altered state of consciousness in terms of human brain processes points to the cinematic means of expression that 'mimic' those processes. Several examples from different films illustrate these points.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=oneiric" title="oneiric">oneiric</a>, <a href="https://publications.waset.org/abstracts/search?q=ASC" title=" ASC"> ASC</a>, <a href="https://publications.waset.org/abstracts/search?q=film" title=" film"> film</a>, <a href="https://publications.waset.org/abstracts/search?q=sound" title=" sound "> sound </a> </p> <a href="https://publications.waset.org/abstracts/2901/altered-states-of-consciousness-in-narrative-cinema-subjective-film-sound" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2901.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">374</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4009</span> Prediction of Coronary Heart Disease Using Fuzzy Logic</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elda%20Maraj">Elda Maraj</a>, <a href="https://publications.waset.org/abstracts/search?q=Shkelqim%20Kuka"> Shkelqim Kuka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Coronary heart disease causes many deaths in the world. Unfortunately, this problem will continue to increase in the future. In this paper, a fuzzy logic model to predict coronary heart disease is presented. The model was developed with seven input variables and one output variable and was applied to 30 patients in Albania. The Fuzzy Logic Toolbox of MATLAB is used. The model inputs are cholesterol, blood pressure, physical activity, age, BMI, smoking, and diabetes, whereas the output is the disease classification. The fuzzy sets and membership functions are chosen in an appropriate manner.
The centroid method is used for defuzzification. The database is taken from the University Hospital Center "Mother Teresa" in Tirana, Albania. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=coronary%20heart%20disease" title="coronary heart disease">coronary heart disease</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20logic%20toolbox" title=" fuzzy logic toolbox"> fuzzy logic toolbox</a>, <a href="https://publications.waset.org/abstracts/search?q=membership%20function" title=" membership function"> membership function</a>, <a href="https://publications.waset.org/abstracts/search?q=prediction%20model" title=" prediction model"> prediction model</a> </p> <a href="https://publications.waset.org/abstracts/148911/prediction-of-coronary-heart-disease-using-fuzzy-logic" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148911.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4008</span> Mathematical Based Forecasting of Heart Attack</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Razieh%20Khalafi">Razieh Khalafi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Myocardial infarction (MI) or acute myocardial infarction (AMI), commonly known as a heart attack, occurs when blood flow stops to part of the heart, causing damage to the heart muscle. An ECG can often show evidence of a previous heart attack or one that's in progress. The patterns on the ECG may indicate which part of the heart has been damaged, as well as the extent of the damage.
In chaos theory, the correlation dimension is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension. In this research, by considering the ECG signal as a random walk, we work on forecasting an oncoming heart attack by analyzing the ECG signals using the correlation dimension. In order to test the model, a set of ECG signals recorded from patients before and after heart attacks was used, and the model's strength in forecasting the behavior of these signals was checked. Results show that this methodology can forecast the ECG, and accordingly a heart attack, with high accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heart%20attack" title="heart attack">heart attack</a>, <a href="https://publications.waset.org/abstracts/search?q=ECG" title=" ECG"> ECG</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20walk" title=" random walk"> random walk</a>, <a href="https://publications.waset.org/abstracts/search?q=correlation%20dimension" title=" correlation dimension"> correlation dimension</a>, <a href="https://publications.waset.org/abstracts/search?q=forecasting" title=" forecasting"> forecasting</a> </p> <a href="https://publications.waset.org/abstracts/29782/mathematical-based-forecasting-of-heart-attack" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29782.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">541</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4007</span> A New Mathematical Method for Heart Attack Forecasting</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Razi%20Khalafi">Razi Khalafi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Myocardial infarction (MI) or acute myocardial infarction (AMI), commonly known as a heart attack, occurs when blood flow stops to part of the heart, causing damage to the heart muscle. An ECG can often show evidence of a previous heart attack or one that's in progress. The patterns on the ECG may indicate which part of the heart has been damaged, as well as the extent of the damage. In chaos theory, the correlation dimension is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension. In this research, by considering the ECG signal as a random walk, we work on forecasting an oncoming heart attack by analysing the ECG signals using the correlation dimension. In order to test the model, a set of ECG signals recorded from patients before and after heart attacks was used, and the model's strength in forecasting the behaviour of these signals was checked. Results show that this methodology can forecast the ECG, and accordingly a heart attack, with high accuracy.
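The two heart attack forecasting abstracts above rest on the same correlation-dimension estimate. A minimal Grassberger-Procaccia-style sketch for a 1-D signal is given below; the embedding dimension, delay, and radius grid are illustrative assumptions, not parameters taken from the papers.

```python
import numpy as np

def correlation_dimension(signal, emb_dim=4, delay=2, n_radii=10):
    """Estimate the correlation dimension of a 1-D signal via the
    Grassberger-Procaccia correlation sum (illustrative parameters)."""
    x = np.asarray(signal, dtype=float)
    # Delay-embed the signal into emb_dim-dimensional points.
    n = len(x) - (emb_dim - 1) * delay
    pts = np.column_stack([x[i * delay: i * delay + n] for i in range(emb_dim)])
    # Pairwise distances between all embedded points (upper triangle only).
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    dists = d[np.triu_indices(n, k=1)]
    radii = np.logspace(np.log10(dists[dists > 0].min()),
                        np.log10(dists.max()), n_radii)
    # Correlation sum C(r): fraction of point pairs closer than r.
    c = np.array([(dists < r).mean() for r in radii])
    # The slope of log C(r) vs. log r approximates the correlation dimension.
    mask = c > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope
```

Comparing the estimated dimension of recordings taken before and after an event is then a simple threshold test on this scalar.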
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heart%20attack" title="heart attack">heart attack</a>, <a href="https://publications.waset.org/abstracts/search?q=ECG" title=" ECG"> ECG</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20walk" title=" random walk"> random walk</a>, <a href="https://publications.waset.org/abstracts/search?q=correlation%20dimension" title=" correlation dimension"> correlation dimension</a>, <a href="https://publications.waset.org/abstracts/search?q=forecasting" title=" forecasting"> forecasting</a> </p> <a href="https://publications.waset.org/abstracts/30802/a-new-mathematical-method-for-heart-attack-forecasting" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30802.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4006</span> Heart Murmurs and Heart Sounds Extraction Using an Algorithm Process Separation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatima%20Mokeddem">Fatima Mokeddem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The phonocardiogram (PCG) is a physiological signal that reflects the mechanical activity of the heart. It is a promising tool for researchers in this field because it is full of indications and useful information for medical diagnosis. PCG segmentation is a basic step in benefiting from this signal. Therefore, this paper presents an algorithm that separates heart sounds and heart murmurs, in case they exist, so that they can be used in several applications and in heart sound analysis.
The separation process presented here is founded on three essential steps: filtering, envelope detection, and heart sound segmentation. The algorithm separates the PCG signal into S1 and S2 and extracts cardiac murmurs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=phonocardiogram%20signal" title="phonocardiogram signal">phonocardiogram signal</a>, <a href="https://publications.waset.org/abstracts/search?q=filtering" title=" filtering"> filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=Envelope" title=" Envelope"> Envelope</a>, <a href="https://publications.waset.org/abstracts/search?q=Detection" title=" Detection"> Detection</a>, <a href="https://publications.waset.org/abstracts/search?q=murmurs" title=" murmurs"> murmurs</a>, <a href="https://publications.waset.org/abstracts/search?q=heart%20sounds" title=" heart sounds"> heart sounds</a> </p> <a href="https://publications.waset.org/abstracts/114970/heart-murmurs-and-heart-sounds-extraction-using-an-algorithm-process-separation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114970.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4005</span> Screening of Congenital Heart Diseases with Fetal Phonocardiography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=F.%20Kov%C3%A1cs">F. Kovács</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20K%C3%A1d%C3%A1r"> K. Kádár</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Hossz%C3%BA"> G.
Hosszú</a>, <a href="https://publications.waset.org/abstracts/search?q=%C3%81.%20T.%20Balogh"> Á. T. Balogh</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Zsedrovits"> T. Zsedrovits</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Kersner"> N. Kersner</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Nagy"> A. Nagy</a>, <a href="https://publications.waset.org/abstracts/search?q=Gy.%20Jeney"> Gy. Jeney</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper presents a novel screening method to indicate congenital heart diseases (CHD) that could otherwise remain undetected because of their low level: not belonging to the high-risk population, these pregnancies are not subject to regular fetal monitoring with ultrasound echocardiography. Because CHD is a morphological defect of the heart causing turbulent blood flow, the turbulence appears as a murmur, which can be detected by fetal phonocardiography (fPCG). The proposed method applies measurements on the maternal abdomen, and sophisticated processing of the recorded sound signal determines the fetal heart murmur. The paper describes the problems and the additional advantages of the fPCG method, including the possibility of measurements at home and its combination with the prescribed regular cardiotocographic (CTG) monitoring. The proposed screening process, implemented on a telemedicine system, provides enhanced safety against hidden cardiac diseases.
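The filtering and envelope-detection steps that the phonocardiography abstracts above rely on can be sketched as follows; the 25-150 Hz band, the 20 ms smoothing window, and the 0.5 threshold are hypothetical illustration values, not the authors' parameters.

```python
import numpy as np

def envelope(pcg, fs, band=(25.0, 150.0), smooth_ms=20.0):
    """Bandpass a PCG recording via FFT masking, then return the
    smoothed envelope of the rectified signal."""
    x = np.asarray(pcg, dtype=float)
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0  # crude bandpass
    filtered = np.fft.irfft(spec, n=len(x))
    rectified = np.abs(filtered)
    win = max(1, int(fs * smooth_ms / 1000.0))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")  # moving average

def segment(env, thresh_ratio=0.5):
    """Mark samples where the envelope exceeds a fraction of its peak,
    a simple stand-in for S1/S2 segmentation."""
    return env > thresh_ratio * env.max()
```

Runs of `True` in the mask correspond to candidate heart sound (or murmur) intervals; a real segmenter would additionally use timing rules to label S1 versus S2.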
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cardiac%20murmurs" title="cardiac murmurs">cardiac murmurs</a>, <a href="https://publications.waset.org/abstracts/search?q=fetal%20phonocardiography" title=" fetal phonocardiography"> fetal phonocardiography</a>, <a href="https://publications.waset.org/abstracts/search?q=screening%20of%20CHDs" title=" screening of CHDs"> screening of CHDs</a>, <a href="https://publications.waset.org/abstracts/search?q=telemedicine%20system" title=" telemedicine system"> telemedicine system</a> </p> <a href="https://publications.waset.org/abstracts/28578/screening-of-congenital-heart-diseases-with-fetal-phonocardiography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28578.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">332</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4004</span> Research on the Two-Way Sound Absorption Performance of Multilayer Material</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Song">Yang Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaojun%20Qiu"> Xiaojun Qiu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multilayer materials are applied in many areas of acoustics. Multilayer porous materials are dominant in room absorbers, and multilayer viscoelastic materials are the basic components of underwater absorption coatings. In most cases, only the one-way sound absorption performance of a multilayer material is considered, according to the side of the sound source.
However, the two-way sound absorption performance also needs to be known in some special cases in which sound is produced on both sides of the material, and the two sides may be in contact with different media. This article investigates such a case. The multilayer material was composed of a viscoelastic layer, a steel plate, and a porous layer, with the two sides of the material in contact with water and air, respectively. A theoretical model is given to describe the sound propagation and impedance in the multilayer absorption material. The two-way sound absorption properties of several multilayer materials whose two sides are in contact with different media were calculated. The calculated results show a clear difference between the two-way sound absorption coefficients. The frequency, the relative layer thicknesses, and the parameters of the multilayer materials all influence the two-way sound absorption coefficients, but to varying degrees. All these simulation results are analyzed in the article. It was found that two-way sound absorption at different frequencies can be promoted by optimizing the configuration parameters. This work will improve the performance of underwater sound absorption coatings, which absorb incident sound from the water and reduce the noise radiated from the interior space.
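As a minimal illustration of why the two sides behave differently, the normal-incidence absorption coefficient follows from the mismatch between the surface impedance and the characteristic impedance of the adjacent medium; the surface impedance value below is a hypothetical number, not one from the paper.

```python
def absorption_coefficient(z_surface, rho, c):
    """Normal-incidence absorption coefficient for a surface with
    (complex) specific acoustic impedance z_surface facing a medium
    of density rho (kg/m^3) and sound speed c (m/s)."""
    z0 = rho * c                              # characteristic impedance
    r = (z_surface - z0) / (z_surface + z0)   # pressure reflection coefficient
    return 1.0 - abs(r) ** 2                  # absorbed fraction of energy

# The same surface impedance absorbs very differently in water vs. air:
z = 1.5e6 + 0.5e6j                            # hypothetical impedance, Pa*s/m
alpha_water = absorption_coefficient(z, 1000.0, 1480.0)  # water side
alpha_air = absorption_coefficient(z, 1.21, 343.0)       # air side
```

Because the characteristic impedance of water (about 1.48e6 Pa*s/m) is roughly 3600 times that of air, an impedance matched to one medium is badly mismatched to the other, which is exactly the two-way asymmetry the abstract studies.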
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=different%20media" title="different media">different media</a>, <a href="https://publications.waset.org/abstracts/search?q=multilayer%20material" title=" multilayer material"> multilayer material</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20absorption%20coating" title=" sound absorption coating"> sound absorption coating</a>, <a href="https://publications.waset.org/abstracts/search?q=two-way%20sound%20absorption" title=" two-way sound absorption"> two-way sound absorption</a> </p> <a href="https://publications.waset.org/abstracts/33628/research-on-the-two-way-sound-absorption-performance-of-multilayer-material" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33628.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">542</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4003</span> Experimental Study of the Sound Absorption of a Geopolymer Panel with a Textile Component Designed for a Railway Corridor</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ludmila%20Fridrichov%C3%A1">Ludmila Fridrichová</a>, <a href="https://publications.waset.org/abstracts/search?q=Roman%20Kn%C3%AD%C5%BEek"> Roman Knížek</a>, <a href="https://publications.waset.org/abstracts/search?q=Pavel%20N%C4%9Bme%C4%8Dek"> Pavel Němeček</a>, <a href="https://publications.waset.org/abstracts/search?q=Katarzyna%20Ewa%20Buczkowska"> Katarzyna Ewa Buczkowska</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The design of the sound absorption panel, which consists of three layers, is presented in this study. 
The first layer of the panel is perforated and provides sound transmission to the inner part of the panel. The second layer is composed of a bulk material whose purpose is to absorb as much noise as possible. The third layer of the panel has two functions: the first is to ensure the strength of the panel, and the second is to reflect the sound back into the bulk layer. Experimental results have shown that the size of the holes in the perforated panel affects the sound absorption at the required frequency, and the percentage of perforated area affects the amount of sound absorbed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sound%20absorption" title="sound absorption">sound absorption</a>, <a href="https://publications.waset.org/abstracts/search?q=railway%20corridor" title=" railway corridor"> railway corridor</a>, <a href="https://publications.waset.org/abstracts/search?q=health" title=" health"> health</a>, <a href="https://publications.waset.org/abstracts/search?q=textile%20waste" title=" textile waste"> textile waste</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20fibres" title=" natural fibres"> natural fibres</a>, <a href="https://publications.waset.org/abstracts/search?q=concrete" title=" concrete"> concrete</a> </p> <a href="https://publications.waset.org/abstracts/193093/experimental-study-of-the-sound-absorption-of-a-geopolymer-panel-with-a-textile-component-designed-for-a-railway-corridor" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193093.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">15</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4002</span> Development of
Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yoshio%20Kurosawa">Yoshio Kurosawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Takao%20Yamaguchi"> Takao Yamaguchi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> High-frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly estimate the sound absorption and insulation properties of a laminate structure and is handy for designers. In this report, the outline of this tool and an analysis example applied to a floor mat are introduced.
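The transfer matrix method named in the keywords of the abstract above can be sketched for normal incidence as follows; treating every layer as an equivalent fluid is a simplifying assumption, and the layer properties passed in any call are user-supplied examples, not values from the paper.

```python
import numpy as np

def layer_matrix(rho, c, d, f):
    """2x2 transfer matrix of a fluid-equivalent layer of density rho,
    sound speed c, and thickness d at frequency f (normal incidence)."""
    k = 2 * np.pi * f / c              # wavenumber in the layer
    z = rho * c                        # characteristic impedance of the layer
    return np.array([[np.cos(k * d), 1j * z * np.sin(k * d)],
                     [1j * np.sin(k * d) / z, np.cos(k * d)]])

def transmission_loss(layers, f, rho0=1.21, c0=343.0):
    """Normal-incidence transmission loss (dB) of a stack of layers,
    each given as (rho, c, thickness), between two air half-spaces."""
    t = np.eye(2, dtype=complex)
    for rho, c, d in layers:           # chain the layer matrices in order
        t = t @ layer_matrix(rho, c, d, f)
    z0 = rho0 * c0
    # Transmission coefficient from the total transfer matrix.
    tau = 2.0 / abs(t[0, 0] + t[0, 1] / z0 + t[1, 0] * z0 + t[1, 1])
    return -20.0 * np.log10(tau)
```

A layer of air itself gives zero transmission loss, while a thin dense plate gives a large one, so the chained matrices reproduce the expected mass-law behavior.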
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automobile" title="automobile">automobile</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustics" title=" acoustics"> acoustics</a>, <a href="https://publications.waset.org/abstracts/search?q=porous%20material" title=" porous material"> porous material</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20matrix%20method" title=" transfer matrix method"> transfer matrix method</a> </p> <a href="https://publications.waset.org/abstracts/32532/development-of-prediction-tool-for-sound-absorption-and-sound-insulation-for-sound-proof-properties" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32532.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">509</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4001</span> Evaluating Classification with Efficacy Metrics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guofan%20Shao">Guofan Shao</a>, <a href="https://publications.waset.org/abstracts/search?q=Lina%20Tang"> Lina Tang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hao%20Zhang"> Hao Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The values of image classification accuracy are affected by class size distributions and classification schemes, making it difficult to compare the performance of classification algorithms across different remote sensing data sources and classification systems. Based on the term efficacy from medicine and pharmacology, we have developed the metrics of image classification efficacy at the map and class levels. 
The novelty of this approach is that a baseline classification is involved in computing image classification efficacies so that the effects of class statistics are reduced. Furthermore, the image classification efficacies are interpretable and comparable, and thus, strengthen the assessment of image data classification methods. We use real-world and hypothetical examples to explain the use of image classification efficacies. The metrics of image classification efficacy meet the critical need to rectify the strategy for the assessment of image classification performance as image classification methods are becoming more diversified. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accuracy%20assessment" title="accuracy assessment">accuracy assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=efficacy" title=" efficacy"> efficacy</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=uncertainty" title=" uncertainty"> uncertainty</a> </p> <a href="https://publications.waset.org/abstracts/142555/evaluating-classification-with-efficacy-metrics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142555.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4000</span> The Effect of Floor Impact Sound Insulation Performance Using Scrambled Thermoplastic Poly Urethane and Ethylene Vinyl Acetate</h5> <div 
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bonsoo%20Koo">Bonsoo Koo</a>, <a href="https://publications.waset.org/abstracts/search?q=Seong%20Shin%20Hong"> Seong Shin Hong</a>, <a href="https://publications.waset.org/abstracts/search?q=Byung%20Kwon%20Lee"> Byung Kwon Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most apartments in Korea have a wall-type structure that presents poor performance regarding floor impact sound insulation. In order to minimize the transmission of floor impact sound, flooring structures are used in which an insulating material, a 30 mm thick pad of EPS or EVA, is sandwiched between a concrete slab and the finishing mortar. Generally, a single-material pad used for insulation yields a heavyweight impact sound level of 44~47 dB with a 210 mm thick slab. This study provides an analysis of the floor impact sound insulation performance of thermoplastic polyurethane (TPU), ethylene vinyl acetate (EVA), and expanded polystyrene (EPS) materials with buffering performance. In mock-up tests, the lightweight impact sound level turned out to be similar, but the heavyweight impact sound level was decreased by 3 dB compared to a conventional single-material insulation pad.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=floor%20impact%20sound" title="floor impact sound">floor impact sound</a>, <a href="https://publications.waset.org/abstracts/search?q=thermoplastic%20poly%20urethane" title=" thermoplastic poly urethane"> thermoplastic poly urethane</a>, <a href="https://publications.waset.org/abstracts/search?q=ethylene%20vinyl%20acetate" title=" ethylene vinyl acetate"> ethylene vinyl acetate</a>, <a href="https://publications.waset.org/abstracts/search?q=heavyweight%20impact%20sound" title=" heavyweight impact sound"> heavyweight impact sound</a> </p> <a href="https://publications.waset.org/abstracts/84146/the-effect-of-floor-impact-sound-insulation-performance-using-scrambled-thermoplastic-poly-urethane-and-ethylene-vinyl-acetate" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84146.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">404</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3999</span> A Review on Predictive Sound Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ajay%20Kadam">Ajay Kadam</a>, <a href="https://publications.waset.org/abstracts/search?q=Ramesh%20Kagalkar"> Ramesh Kagalkar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The proposed research objective is to contribute to a framework for automatic sound recognition. In this framework, the real task is to take any input sound stream, analyze it, and predict the likelihood of the different sounds that appear in it, and on that basis to develop and commercially deploy a flexible sound search engine.
The calculation is clamor and contortion safe, computationally productive, and hugely adaptable, equipped for rapidly recognizing a short portion of sound stream caught through a phone microphone in the presence of frontal area voices and other predominant commotion, and through voice codec pressure, out of a database of over accessible tracks. The algorithm utilizes a combinatorial hashed time-recurrence group of stars examination of the sound, yielding ordinary properties, for example, transparency, in which numerous tracks combined may each be distinguished. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fingerprinting" title="fingerprinting">fingerprinting</a>, <a href="https://publications.waset.org/abstracts/search?q=pure%20tone" title=" pure tone"> pure tone</a>, <a href="https://publications.waset.org/abstracts/search?q=white%20noise" title=" white noise"> white noise</a>, <a href="https://publications.waset.org/abstracts/search?q=hash%20function" title=" hash function"> hash function</a> </p> <a href="https://publications.waset.org/abstracts/33296/a-review-on-predictive-sound-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33296.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">322</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3998</span> Finding the Free Stream Velocity Using Flow Generated Sound</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saeed%20Hosseini">Saeed Hosseini</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Reza%20Tahavvor"> Ali Reza Tahavvor</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Sound processing is one the subjects that newly attracts a lot of researchers. It is efficient and usually less expensive than other methods. In this paper the flow generated sound is used to estimate the flow speed of free flows. Many sound samples are gathered. After analyzing the data, a parameter named wave power is chosen. For all samples, the wave power is calculated and averaged for each flow speed. A curve is fitted to the averaged data and a correlation between the wave power and flow speed is founded. Test data are used to validate the method and errors for all test data were under 10 percent. The speed of the flow can be estimated by calculating the wave power of the flow generated sound and using the proposed correlation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=the%20flow%20generated%20sound" title="the flow generated sound">the flow generated sound</a>, <a href="https://publications.waset.org/abstracts/search?q=free%20stream" title=" free stream"> free stream</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20processing" title=" sound processing"> sound processing</a>, <a href="https://publications.waset.org/abstracts/search?q=speed" title=" speed"> speed</a>, <a href="https://publications.waset.org/abstracts/search?q=wave%20power" title=" wave power"> wave power</a> </p> <a href="https://publications.waset.org/abstracts/35611/finding-the-free-stream-velocity-using-flow-generated-sound" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35611.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">415</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3997</span> Sound 
Instance: Art, Perception and Composition through Soundscapes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ricardo%20Mestre">Ricardo Mestre</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The soundscape stands out as an agglomeration of sounds available in the world, associated with different contexts and origins, being a theme studied by various areas of knowledge, seeking to guide their benefits and their consequences, contributing to the welfare of society and other ecosystems. Murray Schafer, the author who originally developed this concept, highlights the need for a greater recognition of sound reality, through the selection and differentiation of sounds, contributing to a tuning of the world and to the balance and well-being of humanity. According to some authors sound environment, produced and created in various ways, provides various sources of information, contributing to the orientation of the human being, alerting and manipulating him during his daily journey, like small notifications received on a cell phone or other device with these features. In this way, it becomes possible to give sound its due importance in relation to the processes of individual representation, in manners of social, professional and emotional life. Ensuring an individual representation means providing the human being with new tools for the long process of reflection by recognizing his environment, the sounds that represent him, and his perspective on his respective function in it. 
In order to provide more information about the importance of the sound environment inherent to the individual reality, one introduces the term sound instance, in order to refer to the whole sound field existing in the individual's life, which is divided into four distinct subfields, but essential to the process of individual representation, called sound matrix, sound cycles, sound traces and sound interference. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sound%20instance" title="sound instance">sound instance</a>, <a href="https://publications.waset.org/abstracts/search?q=soundscape" title=" soundscape"> soundscape</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20art" title=" sound art"> sound art</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=composition" title=" composition"> composition</a> </p> <a href="https://publications.waset.org/abstracts/155181/sound-instance-art-perception-and-composition-through-soundscapes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155181.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">146</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3996</span> An Approach for Vocal Register Recognition Based on Spectral Analysis of Singing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aleksandra%20Zysk">Aleksandra Zysk</a>, <a href="https://publications.waset.org/abstracts/search?q=Pawel%20Badura"> Pawel Badura</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recognizing 
and controlling vocal registers during singing is a difficult task for beginner vocalist. It requires among others identifying which part of natural resonators is being used when a sound propagates through the body. Thus, an application has been designed allowing for sound recording, automatic vocal register recognition (VRR), and a graphical user interface providing real-time visualization of the signal and recognition results. Six spectral features are determined for each time frame and passed to the support vector machine classifier yielding a binary decision on the head or chest register assignment of the segment. The classification training and testing data have been recorded by ten professional female singers (soprano, aged 19-29) performing sounds for both chest and head register. The classification accuracy exceeded 93% in each of various validation schemes. Apart from a hard two-class clustering, the support vector classifier returns also information on the distance between particular feature vector and the discrimination hyperplane in a feature space. Such an information reflects the level of certainty of the vocal register classification in a fuzzy way. Thus, the designed recognition and training application is able to assess and visualize the continuous trend in singing in a user-friendly graphical mode providing an easy way to control the vocal emission. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=singing" title=" singing"> singing</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20analysis" title=" spectral analysis"> spectral analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=vocal%20emission" title=" vocal emission"> vocal emission</a>, <a href="https://publications.waset.org/abstracts/search?q=vocal%20register" title=" vocal register"> vocal register</a> </p> <a href="https://publications.waset.org/abstracts/65464/an-approach-for-vocal-register-recognition-based-on-spectral-analysis-of-singing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/65464.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">304</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3995</span> Analysis of Sound Loss from the Highway Traffic through Lightweight Insulating Concrete Walls and Artificial Neural Network Modeling of Sound Transmission</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mustafa%20Tosun">Mustafa Tosun</a>, <a href="https://publications.waset.org/abstracts/search?q=Kevser%20Dincer"> Kevser Dincer</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, analysis on whether the lightweight concrete walled structures used in four climatic regions of Turkey are also capable of insulating sound was conducted. 
As a new approach, first the wall’s thermal insulation sufficiency’s were calculated and then, artificial neural network (ANN) modeling was used on their cross sections to check if they are sound transmitters too. The ANN was trained and tested by using MATLAB toolbox on a personal computer. ANN input parameters that used were thickness of lightweight concrete wall, frequency and density of lightweight concrete wall, while the transmitted sound was the output parameter. When the results of the TS analysis and those of ANN modeling are evaluated together, it is found from this study, that sound transmit loss increases at higher frequencies, higher wall densities and with larger wall cross sections. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neuron%20network" title="artificial neuron network">artificial neuron network</a>, <a href="https://publications.waset.org/abstracts/search?q=lightweight%20concrete" title=" lightweight concrete"> lightweight concrete</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20insulation" title=" sound insulation"> sound insulation</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20transmit%20loss" title=" sound transmit loss"> sound transmit loss</a> </p> <a href="https://publications.waset.org/abstracts/41076/analysis-of-sound-loss-from-the-highway-traffic-through-lightweight-insulating-concrete-walls-and-artificial-neural-network-modeling-of-sound-transmission" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41076.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">252</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3994</span> Design of a Real Time Heart 
Sounds Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omer%20Abdalla%20Ishag">Omer Abdalla Ishag</a>, <a href="https://publications.waset.org/abstracts/search?q=Magdi%20Baker%20Amien"> Magdi Baker Amien</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Physicians used the stethoscope for listening patient heart sounds in order to make a diagnosis. However, the determination of heart conditions by acoustic stethoscope is a difficult task so it requires special training of medical staff. This study developed an accurate model for analyzing the phonocardiograph signal based on PC and DSP processor. The system has been realized into two phases; offline and real time phase. In offline phase, 30 cases of heart sounds files were collected from medical students and doctor's world website. For experimental phase (real time), an electronic stethoscope has been designed, implemented and recorded signals from 30 volunteers, 17 were normal cases and 13 were various pathologies cases, these acquired 30 signals were preprocessed using an adaptive filter to remove lung sounds. The background noise has been removed from both offline and real data, using wavelet transform, then graphical and statistics features vector elements were extracted, finally a look-up table was used for classification heart sounds cases. The obtained results of the implemented system showed accuracy of 90%, 80% and sensitivity of 87.5%, 82.4% for offline data, and real data respectively. The whole system has been designed on TMS320VC5509a DSP Platform. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=code%20composer%20studio" title="code composer studio">code composer studio</a>, <a href="https://publications.waset.org/abstracts/search?q=heart%20sounds" title=" heart sounds"> heart sounds</a>, <a href="https://publications.waset.org/abstracts/search?q=phonocardiograph" title=" phonocardiograph"> phonocardiograph</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20transform" title=" wavelet transform"> wavelet transform</a> </p> <a href="https://publications.waset.org/abstracts/37634/design-of-a-real-time-heart-sounds-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37634.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3993</span> A Dynamic Solution Approach for Heart Disease Prediction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Walid%20Moudani">Walid Moudani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The healthcare environment is generally perceived as being information rich yet knowledge poor. However, there is a lack of effective analysis tools to discover hidden relationships and trends in data. In fact, valuable knowledge can be discovered from application of data mining techniques in healthcare system. In this study, a proficient methodology for the extraction of significant patterns from the coronary heart disease warehouses for heart attack prediction, which unfortunately continues to be a leading cause of mortality in the whole world, has been presented. 
For this purpose, we propose to enumerate dynamically the optimal subsets of the reduced features of high interest by using rough sets technique associated to dynamic programming. Therefore, we propose to validate the classification using Random Forest (RF) decision tree to identify the risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions based on the medical profile of patient. Moreover, the experts’ knowledge in this field has been taken into consideration in order to define the disease, its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system is developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated based on set of benchmark techniques applied in this classification problem. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-classifier%20decisions%20tree" title="multi-classifier decisions tree">multi-classifier decisions tree</a>, <a href="https://publications.waset.org/abstracts/search?q=features%20reduction" title=" features reduction"> features reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20programming" title=" dynamic programming"> dynamic programming</a>, <a href="https://publications.waset.org/abstracts/search?q=rough%20sets" title=" rough sets"> rough sets</a> </p> <a href="https://publications.waset.org/abstracts/7975/a-dynamic-solution-approach-for-heart-disease-prediction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7975.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">410</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> 
<li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=134">134</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=135">135</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=heart%20sound%20classification&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" 
class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship 
Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>