CINXE.COM

Search results for: Ranz Brendan D. Gabor

<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: Ranz Brendan D. Gabor</title> <meta name="description" content="Search results for: Ranz Brendan D. Gabor"> <meta name="keywords" content="Ranz Brendan D. Gabor"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="Ranz Brendan D. Gabor" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Ranz Brendan D. Gabor"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 73</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Ranz Brendan D. Gabor</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">73</span> 2.5D Face Recognition Using Gabor Discrete Cosine Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Cheraghian">Ali Cheraghian</a>, <a href="https://publications.waset.org/abstracts/search?q=Farshid%20Hajati"> Farshid Hajati</a>, <a href="https://publications.waset.org/abstracts/search?q=Soheila%20Gheisari"> Soheila Gheisari</a>, <a href="https://publications.waset.org/abstracts/search?q=Yongsheng%20Gao"> Yongsheng Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a novel 2.5D face recognition method based on Gabor Discrete Cosine Transform (GDCT). In the proposed method, the Gabor filter is applied to extract feature vectors from the texture and the depth information. Then, Discrete Cosine Transform (DCT) is used for dimensionality and redundancy reduction to improve computational efficiency. The system is combined texture and depth information in the decision level, which presents higher performance compared to methods, which use texture and depth information, separately. The proposed algorithm is examined on publically available Bosphorus database including models with pose variation. The experimental results show that the proposed method has a higher performance compared to the benchmark. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gabor%20filter" title="Gabor filter">Gabor filter</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20cosine%20transform" title=" discrete cosine transform"> discrete cosine transform</a>, <a href="https://publications.waset.org/abstracts/search?q=2.5d%20face%20recognition" title=" 2.5d face recognition"> 2.5d face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=pose" title=" pose"> pose</a> </p> <a href="https://publications.waset.org/abstracts/37341/25d-face-recognition-using-gabor-discrete-cosine-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37341.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">328</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">72</span> Dynamic Gabor Filter Facial Features-Based Recognition of Emotion in Video Sequences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20Hari%20Prasath">T. Hari Prasath</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Ithaya%20Rani"> P. Ithaya Rani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the world of visual technology, recognizing emotions from the face images is a challenging task. Several related methods have not utilized the dynamic facial features effectively for high performance. This paper proposes a method for emotions recognition using dynamic facial features with high performance. Initially, local features are captured by Gabor filter with different scale and orientations in each frame for finding the position and scale of face part from different backgrounds. The Gabor features are sent to the ensemble classifier for detecting Gabor facial features. The region of dynamic features is captured from the Gabor facial features in the consecutive frames which represent the dynamic variations of facial appearances. In each region of dynamic features is normalized using Z-score normalization method which is further encoded into binary pattern features with the help of threshold values. The binary features are passed to Multi-class AdaBoost classifier algorithm with the well-trained database contain happiness, sadness, surprise, fear, anger, disgust, and neutral expressions to classify the discriminative dynamic features for emotions recognition. The developed method is deployed on the Ryerson Multimedia Research Lab and Cohn-Kanade databases and they show significant performance improvement owing to their dynamic features when compared with the existing methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=detecting%20face" title="detecting face">detecting face</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20filter" title=" Gabor filter"> Gabor filter</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-class%20AdaBoost%20classifier" title=" multi-class AdaBoost classifier"> multi-class AdaBoost classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=Z-score%20normalization" title=" Z-score normalization"> Z-score normalization</a> </p> <a href="https://publications.waset.org/abstracts/85005/dynamic-gabor-filter-facial-features-based-recognition-of-emotion-in-video-sequences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85005.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">71</span> Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A%3A%20Annis%20Fathima">A: Annis Fathima</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Vaidehi"> V. Vaidehi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ajitha"> S. Ajitha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face recognition system finds many applications in surveillance and human computer interaction systems. As the applications using face recognition systems are of much importance and demand more accuracy, more robustness in the face recognition system is expected with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for Gabor filters. This image is convolved with bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis techniques are used to reduce the intra-class space and maximize the inter-class space. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), Weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2 LDA). LDA reduces the feature dimension by extracting the features with greater variance. k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its feature with each of the training set features. The HGWLDA approach is robust against illumination conditions as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using less number of features for varying expressions. The performance of the proposed HGWLDA approaches is evaluated using AT&T database, MIT-India face database and faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20wavelet" title=" Gabor wavelet"> Gabor wavelet</a>, <a href="https://publications.waset.org/abstracts/search?q=LDA" title=" LDA"> LDA</a>, <a href="https://publications.waset.org/abstracts/search?q=k-NN%20classifier" title=" k-NN classifier"> k-NN classifier</a> </p> <a href="https://publications.waset.org/abstracts/11196/hybrid-approach-for-face-recognition-combining-gabor-wavelet-and-linear-discriminant-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11196.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">70</span> Diversity of Rhopalocera in Different Vegetation Types of PC Hills, Philippines</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sean%20E.%20Gregory%20P.%20Igano">Sean E. Gregory P. Igano</a>, <a href="https://publications.waset.org/abstracts/search?q=Ranz%20Brendan%20D.%20Gabor"> Ranz Brendan D. Gabor</a>, <a href="https://publications.waset.org/abstracts/search?q=Baron%20Arthur%20M.%20Cabalona"> Baron Arthur M. Cabalona</a>, <a href="https://publications.waset.org/abstracts/search?q=Numeriano%20Amer%20E.%20Gutierrez"> Numeriano Amer E. Gutierrez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Distribution patterns and abundance of butterflies respond in the long term to variations in habitat quality. Studying butterfly populations would give evidence on how vegetation types influence their diversity. In this research, the Rhopalocera diversity of PC Hills was assessed to provide information on diversity trends in varying vegetation types. PC Hills, located in Palo, Leyte, Philippines, is a relatively undisturbed area having forests and rivers. Despite being situated nearby inhabited villages; the area is observed to have a possible rich butterfly population. To assess the Rhopalocera species richness and diversity, transect sampling technique was applied to monitor and document butterflies. Transects were placed in locations that can be mapped, described and relocated easily. Three transects measuring three hundred meters each with a 5-meter diameter were established based on the different vegetation types present. The three main vegetation types identified were the agroecosystem (transect 1), dipterocarp forest (transect 2), and riparian (transect 3). Sample collections were done only from 9:00 A.M to 3:00 P.M. under warm and bright weather, with no more than moderate winds and when it was not raining. When weather conditions did not permit collection, it was moved to another day. A GPS receiver was used to record the location of the selected sample sites and the coordinates of where each sample was collected. Morphological analysis was done for the first phase of the study to identify the voucher specimen to the lowest taxonomic level possible using books about butterfly identification guides and species lists as references. For the second phase, DNA barcoding will be used to further identify the voucher specimen into the species taxonomic level. After eight (8) sampling sessions, seven hundred forty-two (742) individuals were seen, and twenty-two (22) Rhopalocera genera were identified through morphological identification. Nymphalidae family of genus Ypthima and the Pieridae family of genera Eurema and Leptosia were the most dominant species observed. Twenty (20) of the thirty-one (31) voucher specimen were already identified to their species taxonomic level using DNA Barcoding. Shannon-Weiner index showed that the highest diversity level was observed in the third transect (H’ = 2.947), followed by the second transect (H’ = 2.6317) and the lowest being in the first transect (H’ = 1.767). This indicates that butterflies are likely to inhabit dipterocarp and riparian vegetation types than agroecosystem, which influences their species composition and diversity. Moreover, the appearance of a river in the riparian vegetation supported its diversity value since butterflies have the tendency to fly into areas near rivers. Species identification of other voucher specimen will be done in order to compute the overall species richness in PC Hills. Further butterfly sampling sessions of PC Hills is recommended for a more reliable diversity trend and to discover more butterfly species. Expanding the research by assessing the Rhopalocera diversity in other locations should be considered along with studying factors that affect butterfly species composition other than vegetation types. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=distribution%20patterns" title="distribution patterns">distribution patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=DNA%20barcoding" title=" DNA barcoding"> DNA barcoding</a>, <a href="https://publications.waset.org/abstracts/search?q=morphological%20analysis" title=" morphological analysis"> morphological analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=Rhopalocera" title=" Rhopalocera"> Rhopalocera</a> </p> <a href="https://publications.waset.org/abstracts/99781/diversity-of-rhopalocera-in-different-vegetation-types-of-pc-hills-philippines" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/99781.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">154</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">69</span> A Simple Adaptive Atomic Decomposition Voice Activity Detector Implemented by Matching Pursuit</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thomas%20Bryan">Thomas Bryan</a>, <a href="https://publications.waset.org/abstracts/search?q=Veton%20Kepuska"> Veton Kepuska</a>, <a href="https://publications.waset.org/abstracts/search?q=Ivica%20Kostanic"> Ivica Kostanic</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A simple adaptive voice activity detector (VAD) is implemented using Gabor and gammatone atomic decomposition of speech for high Gaussian noise environments. Matching pursuit is used for atomic decomposition, and is shown to achieve optimal speech detection capability at high data compression rates for low signal to noise ratios. The most active dictionary elements found by matching pursuit are used for the signal reconstruction so that the algorithm adapts to the individual speakers dominant time-frequency characteristics. Speech has a high peak to average ratio enabling matching pursuit greedy heuristic of highest inner products to isolate high energy speech components in high noise environments. Gabor and gammatone atoms are both investigated with identical logarithmically spaced center frequencies, and similar bandwidths. The algorithm performs equally well for both Gabor and gammatone atoms with no significant statistical differences. The algorithm achieves 70% accuracy at a 0 dB SNR, 90% accuracy at a 5 dB SNR and 98% accuracy at a 20dB SNR using 30dB SNR as a reference for voice activity. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=atomic%20decomposition" title="atomic decomposition">atomic decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=gabor" title=" gabor"> gabor</a>, <a href="https://publications.waset.org/abstracts/search?q=gammatone" title=" gammatone"> gammatone</a>, <a href="https://publications.waset.org/abstracts/search?q=matching%20pursuit" title=" matching pursuit"> matching pursuit</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20activity%20detection" title=" voice activity detection"> voice activity detection</a> </p> <a href="https://publications.waset.org/abstracts/27613/a-simple-adaptive-atomic-decomposition-voice-activity-detector-implemented-by-matching-pursuit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27613.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">290</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">68</span> Iris Feature Extraction and Recognition Based on Two-Dimensional Gabor Wavelength Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bamidele%20Samson%20Alobalorun">Bamidele Samson Alobalorun</a>, <a href="https://publications.waset.org/abstracts/search?q=Ifedotun%20Roseline%20Idowu"> Ifedotun Roseline Idowu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometrics technologies apply the human body parts for their unique and reliable identification based on physiological traits. The iris recognition system is a biometric–based method for identification. The human iris has some discriminating characteristics which provide efficiency to the method. In order to achieve this efficiency, there is a need for feature extraction of the distinct features from the human iris in order to generate accurate authentication of persons. In this study, an approach for an iris recognition system using 2D Gabor for feature extraction is applied to iris templates. The 2D Gabor filter formulated the patterns that were used for training and equally sent to the hamming distance matching technique for recognition. A comparison of results is presented using two iris image subjects of different matching indices of 1,2,3,4,5 filter based on the CASIA iris image database. By comparing the two subject results, the actual computational time of the developed models, which is measured in terms of training and average testing time in processing the hamming distance classifier, is found with best recognition accuracy of 96.11% after capturing the iris localization or segmentation using the Daughman’s Integro-differential, the normalization is confined to the Daugman’s rubber sheet model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daugman%20rubber%20sheet" title="Daugman rubber sheet">Daugman rubber sheet</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamming%20distance" title=" Hamming distance"> Hamming distance</a>, <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition%20system" title=" iris recognition system"> iris recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=2D%20Gabor%20wavelet%20transform" title=" 2D Gabor wavelet transform"> 2D Gabor wavelet transform</a> </p> <a href="https://publications.waset.org/abstracts/170345/iris-feature-extraction-and-recognition-based-on-two-dimensional-gabor-wavelength-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170345.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">65</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">67</span> Feature Extraction Based on Contourlet Transform and Log Gabor Filter for Detection of Ulcers in Wireless Capsule Endoscopy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nimisha%20Elsa%20Koshy">Nimisha Elsa Koshy</a>, <a href="https://publications.waset.org/abstracts/search?q=Varun%20P.%20Gopi"> Varun P. Gopi</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20I.%20Thajudin%20Ahamed"> V. I. Thajudin Ahamed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The entire visualization of GastroIntestinal (GI) tract is not possible with conventional endoscopic exams. Wireless Capsule Endoscopy (WCE) is a low risk, painless, noninvasive procedure for diagnosing diseases such as bleeding, polyps, ulcers, and Crohns disease within the human digestive tract, especially the small intestine that was unreachable using the traditional endoscopic methods. However, analysis of massive images of WCE detection is tedious and time consuming to physicians. Hence, researchers have developed software methods to detect these diseases automatically. Thus, the effectiveness of WCE can be improved. In this paper, a novel textural feature extraction method is proposed based on Contourlet transform and Log Gabor filter to distinguish ulcer regions from normal regions. The results show that the proposed method performs well with a high accuracy rate of 94.16% using Support Vector Machine (SVM) classifier in HSV colour space. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contourlet%20transform" title="contourlet transform">contourlet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=log%20gabor%20filter" title=" log gabor filter"> log gabor filter</a>, <a href="https://publications.waset.org/abstracts/search?q=ulcer" title=" ulcer"> ulcer</a>, <a href="https://publications.waset.org/abstracts/search?q=wireless%20capsule%20endoscopy" title=" wireless capsule endoscopy"> wireless capsule endoscopy</a> </p> <a href="https://publications.waset.org/abstracts/17330/feature-extraction-based-on-contourlet-transform-and-log-gabor-filter-for-detection-of-ulcers-in-wireless-capsule-endoscopy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17330.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">540</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">66</span> Early Recognition and Grading of Cataract Using a Combined Log Gabor/Discrete Wavelet Transform with ANN and SVM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hadeer%20R.%20M.%20Tawfik">Hadeer R. M. Tawfik</a>, <a href="https://publications.waset.org/abstracts/search?q=Rania%20A.%20K.%20Birry"> Rania A. K. Birry</a>, <a href="https://publications.waset.org/abstracts/search?q=Amani%20A.%20Saad"> Amani A. Saad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Eyes are considered to be the most sensitive and important organ for human being. Thus, any eye disorder will affect the patient in all aspects of life. Cataract is one of those eye disorders that lead to blindness if not treated correctly and quickly. This paper demonstrates a model for automatic detection, classification, and grading of cataracts based on image processing techniques and artificial intelligence. The proposed system is developed to ease the cataract diagnosis process for both ophthalmologists and patients. The wavelet transform combined with 2D Log Gabor Wavelet transform was used as feature extraction techniques for a dataset of 120 eye images followed by a classification process that classified the image set into three classes; normal, early, and advanced stage. A comparison between the two used classifiers, the support vector machine SVM and the artificial neural network ANN were done for the same dataset of 120 eye images. It was concluded that SVM gave better results than ANN. SVM success rate result was 96.8% accuracy where ANN success rate result was 92.3% accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cataract" title="cataract">cataract</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=grading" title=" grading"> grading</a>, <a href="https://publications.waset.org/abstracts/search?q=log-gabor" title=" log-gabor"> log-gabor</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machines" title=" support vector machines"> support vector machines</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet" title=" wavelet"> wavelet</a> </p> <a href="https://publications.waset.org/abstracts/101464/early-recognition-and-grading-of-cataract-using-a-combined-log-gabordiscrete-wavelet-transform-with-ann-and-svm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101464.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">332</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">65</span> Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20Bryan">T. Bryan </a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Kepuska"> V. Kepuska</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20Kostnaic"> I. Kostnaic</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to have higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted from the audio data by performing atomic decomposition with Gabor or gammatone seed atoms. This process identifies segments of audio data that are locally coherent with the seed atoms. Envelope samples are extracted by identifying locally coherent audio data segments with Gabor or gammatone seed atoms, found by matching pursuit. The envelope samples are formed by taking the kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) verses data compression curves are generated for the seed atoms as well as the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors. This display method is used to compare of the output of the SAE with the envelope samples that produced them. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the highest denoising basis vectors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sparse%20dictionary%20learning" title="sparse dictionary learning">sparse dictionary learning</a>, <a href="https://publications.waset.org/abstracts/search?q=autoencoder" title=" autoencoder"> autoencoder</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20autoencoder" title=" sparse autoencoder"> sparse autoencoder</a>, <a href="https://publications.waset.org/abstracts/search?q=basis%20vectors" title=" basis vectors"> basis vectors</a>, <a href="https://publications.waset.org/abstracts/search?q=atomic%20decomposition" title=" atomic decomposition"> atomic decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=envelope%20sampling" title=" envelope sampling"> envelope sampling</a>, <a href="https://publications.waset.org/abstracts/search?q=envelope%20samples" title=" envelope samples"> envelope samples</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor" title=" Gabor"> Gabor</a>, <a href="https://publications.waset.org/abstracts/search?q=gammatone" title=" gammatone"> gammatone</a>, <a href="https://publications.waset.org/abstracts/search?q=matching%20pursuit" title=" matching pursuit"> matching pursuit</a> </p> <a href="https://publications.waset.org/abstracts/42586/atomic-decomposition-audio-data-compression-and-denoising-using-sparse-dictionary-feature-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42586.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">252</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">64</span> Automatic Detection and Classification of Diabetic Retinopathy Using Retinal Fundus Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Biran">A. Biran</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Sobhe%20Bidari"> P. Sobhe Bidari</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Almazroe"> A. Almazroe</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Lakshminarayanan"> V. Lakshminarayanan</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Raahemifar"> K. Raahemifar </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic Retinopathy (DR) is a severe retinal disease which is caused by diabetes mellitus. It leads to blindness when it progress to proliferative level. Early indications of DR are the appearance of microaneurysms, hemorrhages and hard exudates. In this paper, an automatic algorithm for detection of DR has been proposed. The algorithm is based on combination of several image processing techniques including Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filter and thresholding. Also, Support Vector Machine (SVM) Classifier is used to classify retinal images to normal or abnormal cases including non-proliferative or proliferative DR. The proposed method has been tested on images selected from Structured Analysis of the Retinal (STARE) database using MATLAB code. The method is perfectly able to detect DR. The sensitivity specificity and accuracy of this approach are 90%, 87.5%, and 91.4% respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title="diabetic retinopathy">diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a>, <a href="https://publications.waset.org/abstracts/search?q=STARE" title=" STARE"> STARE</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20filter" title=" Gabor filter"> Gabor filter</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/49824/automatic-detection-and-classification-of-diabetic-retinopathy-using-retinal-fundus-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49824.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">294</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">63</span> Periodicity Analysis of Long-Term Waterquality Data Series of the Hungarian Section of the River Tisza Using Morlet Wavelet Spectrum Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P%C3%A9ter%20Tanos">Péter Tanos</a>, <a href="https://publications.waset.org/abstracts/search?q=J%C3%B3zsef%20Kov%C3%A1cs"> József Kovács</a>, <a href="https://publications.waset.org/abstracts/search?q=Ang%C3%A9la%20Anda"> Angéla Anda</a>, <a href="https://publications.waset.org/abstracts/search?q=G%C3%A1bor%20V%C3%A1rb%C3%ADr%C3%B3"> Gábor Várbíró</a>, <a href="https://publications.waset.org/abstracts/search?q=S%C3%A1ndor%20Moln%C3%A1r"> Sándor Molnár</a>, <a href="https://publications.waset.org/abstracts/search?q=Istv%C3%A1n%20G%C3%A1bor%20Hatvani"> István Gábor Hatvani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The River Tisza is the second largest river in Central Europe. In this study, Morlet wavelet spectrum (periodicity) analysis was used with chemical, biological and physical water quality data for the Hungarian section of the River Tisza. In the research 15, water quality parameters measured at 14 sampling sites in the River Tisza and 4 sampling sites in the main artificial changes were assessed for the time period 1993 - 2005. Results show that annual periodicity was not always to be found in the water quality parameters, at least at certain sampling sites. Periodicity was found to vary over space and time, but in general, an increase was observed in the company of higher trophic states of the river heading downstream. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=annual%20periodicity%20water%20quality" title="annual periodicity water quality">annual periodicity water quality</a>, <a href="https://publications.waset.org/abstracts/search?q=spatiotemporal%20variability%20of%20periodic%20behavior" title=" spatiotemporal variability of periodic behavior"> spatiotemporal variability of periodic behavior</a>, <a href="https://publications.waset.org/abstracts/search?q=Morlet%20wavelet%20spectrum%20analysis" title=" Morlet wavelet spectrum analysis"> Morlet wavelet spectrum analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=River%20Tisza" title=" River Tisza"> River Tisza</a> </p> <a href="https://publications.waset.org/abstracts/60822/periodicity-analysis-of-long-term-waterquality-data-series-of-the-hungarian-section-of-the-river-tisza-using-morlet-wavelet-spectrum-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60822.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">344</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">62</span> Labview-Based System for Fiber Links Events Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bo%20Liu">Bo Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Qingshan%20Kong"> Qingshan Kong</a>, <a href="https://publications.waset.org/abstracts/search?q=Weiqing%20Huang"> Weiqing Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the rapid development of modern communication, diagnosing the fiber-optic quality and faults in real-time is widely focused. In this paper, a Labview-based system is proposed for fiber-optic faults detection. The wavelet threshold denoising method combined with Empirical Mode Decomposition (EMD) is applied to denoise the optical time domain reflectometer (OTDR) signal. Then the method based on Gabor representation is used to detect events. Experimental measurements show that signal to noise ratio (SNR) of the OTDR signal is improved by 1.34dB on average, compared with using the wavelet threshold denosing method. The proposed system has a high score in event detection capability and accuracy. The maximum detectable fiber length of the proposed Labview-based system can be 65km. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=empirical%20mode%20decomposition" title="empirical mode decomposition">empirical mode decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=events%20detection" title=" events detection"> events detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20transform" title=" Gabor transform"> Gabor transform</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20time%20domain%20reflectometer" title=" optical time domain reflectometer"> optical time domain reflectometer</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20threshold%20denoising" title=" wavelet threshold denoising"> wavelet threshold denoising</a> </p> <a href="https://publications.waset.org/abstracts/105512/labview-based-system-for-fiber-links-events-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/105512.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">123</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">61</span> Automatic Target Recognition in SAR Images Based on Sparse Representation Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmet%20Karagoz">Ahmet Karagoz</a>, <a href="https://publications.waset.org/abstracts/search?q=Irfan%20Karagoz"> Irfan Karagoz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Synthetic Aperture Radar (SAR) is a radar mechanism that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, regardless of day and night. In this study, SAR images of military vehicles with different azimuth and descent angles are pre-processed at the first stage. The main purpose here is to reduce the high speckle noise found in SAR images. For this, the Wiener adaptive filter, the mean filter, and the median filters are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from other regions containing unnecessary information. The target image is parsed with the brightest 20% pixel value of 255 and the other pixel values of 0. In addition, by using appropriate parameters of statistical region merging algorithm, segmentation comparison is performed. In the step of feature extraction, the feature vectors belonging to the vehicles are obtained by using Gabor filters with different orientation, frequency and angle values. A number of Gabor filters are created by changing the orientation, frequency and angle parameters of the Gabor filters to extract important features of the images that form the distinctive parts. Finally, images are classified by sparse representation method. In the study, l₁ norm analysis of sparse representation is used. A joint database of the feature vectors generated by the target images of military vehicle types is obtained side by side and this database is transformed into the matrix form. In order to classify the vehicles in a similar way, the test images of each vehicle is converted to the vector form and l₁ norm analysis of the sparse representation method is applied through the existing database matrix form. As a result, correct recognition has been performed by matching the target images of military vehicles with the test images by means of the sparse representation method. 97% classification success of SAR images of different military vehicle types is obtained. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20target%20recognition" title="automatic target recognition">automatic target recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=SAR%20images" title=" SAR images"> SAR images</a> </p> <a href="https://publications.waset.org/abstracts/71185/automatic-target-recognition-in-sar-images-based-on-sparse-representation-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71185.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">60</span> Classification of Coughing and Breathing Activities Using Wearable and a Light-Weight DL Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subham%20Ghosh">Subham Ghosh</a>, <a href="https://publications.waset.org/abstracts/search?q=Arnab%20Nandi"> Arnab Nandi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: The proliferation of Wireless Body Area Networks (WBAN) and Internet of Things (IoT) applications demonstrates the potential for continuous monitoring of physical changes in the body. These technologies are vital for health monitoring tasks, such as identifying coughing and breathing activities, which are necessary for disease diagnosis and management. Monitoring activities such as coughing and deep breathing can provide valuable insights into a variety of medical issues. Wearable radio-based antenna sensors, which are lightweight and easy to incorporate into clothing or portable goods, provide continuous monitoring. This mobility gives it a substantial advantage over stationary environmental sensors like as cameras and radar, which are constrained to certain places. Furthermore, using compressive techniques provides benefits such as reduced data transmission speeds and memory needs. These wearable sensors offer more advanced and diverse health monitoring capabilities. Methodology: This study analyzes the feasibility of using a semi-flexible antenna operating at 2.4 GHz (ISM band) and positioned around the neck and near the mouth to identify three activities: coughing, deep breathing, and idleness. Vector network analyzer (VNA) is used to collect time-varying complex reflection coefficient data from perturbed antenna nearfield. The reflection coefficient (S11) conveys nuanced information caused by simultaneous variations in the nearfield radiation of three activities across time. The signatures are sparsely represented with gaussian windowed Gabor spectrograms. The Gabor spectrogram is used as a sparse representation approach, which reassigns the ridges of the spectrogram images to improve their resolution and focus on essential components. The antenna is biocompatible in terms of specific absorption rate (SAR). The sparsely represented Gabor spectrogram pictures are fed into a lightweight deep learning (DL) model for feature extraction and classification. Two antenna locations are investigated in order to determine the most effective localization for three different activities. Findings: Cross-validation techniques were used on data from both locations. Due to the complex form of the recorded S11, separate analyzes and assessments were performed on the magnitude, phase, and their combination. The combination of magnitude and phase fared better than the separate analyses. Various sliding window sizes, ranging from 1 to 5 seconds, were tested to find the best window for activity classification. It was discovered that a neck-mounted design was effective at detecting the three unique behaviors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=activity%20recognition" title="activity recognition">activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=antenna" title=" antenna"> antenna</a>, <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title=" deep-learning"> deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=time-frequency" title=" time-frequency"> time-frequency</a> </p> <a href="https://publications.waset.org/abstracts/194633/classification-of-coughing-and-breathing-activities-using-wearable-and-a-light-weight-dl-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/194633.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">9</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">59</span> High Sensitivity Crack Detection and Locating with Optimized Spatial Wavelet Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Ghanbari%20Mardasi">A. Ghanbari Mardasi</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Wu"> N. Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Wu"> C. Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, a spatial wavelet-based crack localization technique for a thick beam is presented. Wavelet scale in spatial wavelet transformation is optimized to enhance crack detection sensitivity. A windowing function is also employed to erase the edge effect of the wavelet transformation, which enables the method to detect and localize cracks near the beam/measurement boundaries. Theoretical model and vibration analysis considering the crack effect are first proposed and performed in MATLAB based on the Timoshenko beam model. Gabor wavelet family is applied to the beam vibration mode shapes derived from the theoretical beam model to magnify the crack effect so as to locate the crack. Relative wavelet coefficient is obtained for sensitivity analysis by comparing the coefficient values at different positions of the beam with the lowest value in the intact area of the beam. Afterward, the optimal wavelet scale corresponding to the highest relative wavelet coefficient at the crack position is obtained for each vibration mode, through numerical simulations. The same procedure is performed for cracks with different sizes and positions in order to find the optimal scale range for the Gabor wavelet family. Finally, Hanning window is applied to different vibration mode shapes in order to overcome the edge effect problem of wavelet transformation and its effect on the localization of crack close to the measurement boundaries. Comparison of the wavelet coefficients distribution of windowed and initial mode shapes demonstrates that window function eases the identification of the cracks close to the boundaries. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20effect" title="edge effect">edge effect</a>, <a href="https://publications.waset.org/abstracts/search?q=scale%20optimization" title=" scale optimization"> scale optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=small%20crack%20locating" title=" small crack locating"> small crack locating</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20wavelet" title=" spatial wavelet"> spatial wavelet</a> </p> <a href="https://publications.waset.org/abstracts/68932/high-sensitivity-crack-detection-and-locating-with-optimized-spatial-wavelet-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68932.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">357</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">58</span> Current Issues of Cross-Border Enforcement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=G%C3%A1bor%20Kocsm%C3%A1rik">Gábor Kocsmárik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The topic of this is coercive measures against assets in which the factor of the procedure contains a foreign element. We speak of cross-border enforcement if the debtor or the property requesting enforcement or subject to enforcement is not located in the bordering country. Given that the jurisdiction of a country cannot extend beyond its borders, the cooperation of nations and the mutual recognition of their decisions are necessary to eliminate this. In addition, it is essential to create framework rules that are binding and enforceable for each country participating in the convention. During the study, some conventions between countries that are still in force will be presented, which can serve as a starting point for dealing with existing problems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=law" title="law">law</a>, <a href="https://publications.waset.org/abstracts/search?q=execution" title=" execution"> execution</a>, <a href="https://publications.waset.org/abstracts/search?q=civil%20procedure%20law" title=" civil procedure law"> civil procedure law</a>, <a href="https://publications.waset.org/abstracts/search?q=international" title=" international"> international</a> </p> <a href="https://publications.waset.org/abstracts/186435/current-issues-of-cross-border-enforcement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186435.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">34</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">57</span> Scar Removal Stretegy for Fingerprint Using Diffusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20A.%20U.%20Khan">Mohammad A. U. Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Tariq%20M.%20Khan"> Tariq M. Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Yinan%20Kong"> Yinan Kong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fingerprint image enhancement is one of the most important step in an automatic fingerprint identification recognition (AFIS) system which directly affects the overall efficiency of AFIS. The conventional fingerprint enhancement like Gabor and Anisotropic filters do fill the gaps in ridge lines but they fail to tackle scar lines. To deal with this problem we are proposing a method for enhancing the ridges and valleys with scar so that true minutia points can be extracted with accuracy. Our results have shown an improved performance in terms of enhancement. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fingerprint%20image%20enhancement" title="fingerprint image enhancement">fingerprint image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=removing%20noise" title=" removing noise"> removing noise</a>, <a href="https://publications.waset.org/abstracts/search?q=coherence" title=" coherence"> coherence</a>, <a href="https://publications.waset.org/abstracts/search?q=enhanced%20diffusion" title=" enhanced diffusion"> enhanced diffusion</a> </p> <a href="https://publications.waset.org/abstracts/19427/scar-removal-stretegy-for-fingerprint-using-diffusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19427.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">515</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">56</span> Source Separation for Global Multispectral Satellite Images Indexing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aymen%20Bouzid">Aymen Bouzid</a>, <a href="https://publications.waset.org/abstracts/search?q=Jihen%20Ben%20Smida"> Jihen Ben Smida</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose to prove the importance of the application of blind source separation methods on remote sensing data in order to index multispectral images. The proposed method starts with Gabor Filtering and the application of a Blind Source Separation to get a more effective representation of the information contained on the observation images. After that, a feature vector is extracted from each image in order to index them. Experimental results show the superior performance of this approach. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20source%20separation" title="blind source separation">blind source separation</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20based%20image%20retrieval" title=" content based image retrieval"> content based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction%20multispectral" title=" feature extraction multispectral"> feature extraction multispectral</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20images" title=" satellite images"> satellite images</a> </p> <a href="https://publications.waset.org/abstracts/28585/source-separation-for-global-multispectral-satellite-images-indexing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28585.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">403</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">55</span> The Way We Express vs. What We Express</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Brendan%20Mooney">Brendan Mooney</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We often do not consider the quality of the way we express ourselves as being fundamental to well-being. Society focuses predominantly on what we do, not the way we do it, to our great detriment. For example, those who have experienced domestic violence often comment that it was not what was said that hurt the most but the way it was said. In other words, the quality in the way the words were used communicated far more than the actual words themselves. This is an important area of focus for practitioners who may be inclined to emphasize who said what but not bring equal, if not more, focus to the quality of one’s expression. The aim of this study is to highlight how and why the way we express ourselves is more important than what we express, which includes words and all behaviors. Given we are a sensitive species it matters to pay attention to the communication that is not said. For example, we have the ability to recognize that a person is upset or angry by the way they walk into a room, even if they do not say anything or look at anyone. Our sensitivity allows us to detect even the slightest change in another’s emotional state, irrespective of what their exterior behaviors may be exhibiting. This study will focus on the importance of recognizing the quality in the way we express as being fundamental to wellbeing, as it allows us to easily and simply navigate life and relationships without needing to experience the usual pitfalls that otherwise prevail. This research utilizes clinical experience, client observations and client feedback, and several case studies were utilized to illustrate real-life examples of the above. This study is not so much a model of life but a way of life that confirms our deepest nature, that we are incredibly sensitive and far more so than we appreciate or utilize in everyday practical human life. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=communication" title="communication">communication</a>, <a href="https://publications.waset.org/abstracts/search?q=integrity" title=" integrity"> integrity</a>, <a href="https://publications.waset.org/abstracts/search?q=quality" title=" quality"> quality</a>, <a href="https://publications.waset.org/abstracts/search?q=sensitivity" title=" sensitivity"> sensitivity</a>, <a href="https://publications.waset.org/abstracts/search?q=wellbeing" title=" wellbeing"> wellbeing</a> </p> <a href="https://publications.waset.org/abstracts/188919/the-way-we-express-vs-what-we-express" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188919.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">35</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">54</span> Absence of Vancomycin-Resistant Enterococci Amongst Urban and Rural Hooded Crows in Hungary</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Isma%20Benmazouz">Isma Benmazouz</a>, <a href="https://publications.waset.org/abstracts/search?q=B%C3%A1lint%20Joszef%20Nagy"> Bálint Joszef Nagy</a>, <a href="https://publications.waset.org/abstracts/search?q=Bence%20B%C3%A1lacs"> Bence Bálacs</a>, <a href="https://publications.waset.org/abstracts/search?q=G%C3%A1bor%20Kardos"> Gábor Kardos</a>, <a href="https://publications.waset.org/abstracts/search?q=L%C3%A1szl%C3%B3%20K%C5%91v%C3%A9r"> László Kővér</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Vancomycin-resistant enterococci (VRE) are among the major nosocomial threats, which have a potential for zoonotic transmission due to the ubiquity of enterococci in the environment and in animal microbiota, e.g., wild birds. . In order to assess the prevalence in an urbanized bird species, 221 fecal samples were collected from Hooded crows (Corvus cornix) in 2020. Fecal samples were screened using VRE agar plates. None of the samples yielded VRE. The absence of VRE isolates in sampled urban hooded crows indicates that crows residing in the city do not necessarily constitute a reservoir of VREs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=resistance" title="resistance">resistance</a>, <a href="https://publications.waset.org/abstracts/search?q=crows" title=" crows"> crows</a>, <a href="https://publications.waset.org/abstracts/search?q=Enterococci" title=" Enterococci"> Enterococci</a>, <a href="https://publications.waset.org/abstracts/search?q=wild%20birds" title=" wild birds"> wild birds</a> </p> <a href="https://publications.waset.org/abstracts/146604/absence-of-vancomycin-resistant-enterococci-amongst-urban-and-rural-hooded-crows-in-hungary" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146604.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">53</span> Data Integration with Geographic Information System Tools for Rural Environmental Monitoring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tamas%20Jancso">Tamas Jancso</a>, <a href="https://publications.waset.org/abstracts/search?q=Andrea%20Podor"> Andrea Podor</a>, <a href="https://publications.waset.org/abstracts/search?q=Eva%20Nagyne%20Hajnal"> Eva Nagyne Hajnal</a>, <a href="https://publications.waset.org/abstracts/search?q=Peter%20Udvardy"> Peter Udvardy</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20Nagy"> Gabor Nagy</a>, <a href="https://publications.waset.org/abstracts/search?q=Attila%20Varga"> Attila Varga</a>, <a href="https://publications.waset.org/abstracts/search?q=Meng%20Qingyan"> Meng Qingyan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with the conditions and circumstances of integration of remotely sensed data for rural environmental monitoring purposes. The main task is to make decisions during the integration process when we have data sources with different resolution, location, spectral channels, and dimension. In order to have exact knowledge about the integration and data fusion possibilities, it is necessary to know the properties (metadata) that characterize the data. The paper explains the joining of these data sources using their attribute data through a sample project. The resulted product will be used for rural environmental analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title="remote sensing">remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=GIS" title=" GIS"> GIS</a>, <a href="https://publications.waset.org/abstracts/search?q=metadata" title=" metadata"> metadata</a>, <a href="https://publications.waset.org/abstracts/search?q=integration" title=" integration"> integration</a>, <a href="https://publications.waset.org/abstracts/search?q=environmental%20analysis" title=" environmental analysis"> environmental analysis</a> </p> <a href="https://publications.waset.org/abstracts/151549/data-integration-with-geographic-information-system-tools-for-rural-environmental-monitoring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151549.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">120</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">52</span> Highly Accurate Tennis Ball Throwing Machine with Intelligent Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ferenc%20Kov%C3%A1cs">Ferenc Kovács</a>, <a href="https://publications.waset.org/abstracts/search?q=G%C3%A1bor%20Hossz%C3%BA"> Gábor Hosszú</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper presents an advanced control system for tennis ball throwing machines to improve their accuracy according to the ball impact points. A further advantage of the system is the much easier calibration process involving the intelligent solution of the automatic adjustment of the stroking parameters according to the ball elasticity, the self-calibration, the use of the safety margin at very flat strokes and the possibility to placing the machine to any position of the half court. The system applies mathematical methods to determine the exact ball trajectories and special approximating processes to access all points on the aimed half court. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=control%20system" title="control system">control system</a>, <a href="https://publications.waset.org/abstracts/search?q=robot%20programming" title=" robot programming"> robot programming</a>, <a href="https://publications.waset.org/abstracts/search?q=robot%20control" title=" robot control"> robot control</a>, <a href="https://publications.waset.org/abstracts/search?q=sports%20equipment" title=" sports equipment"> sports equipment</a>, <a href="https://publications.waset.org/abstracts/search?q=throwing%20machine" title=" throwing machine"> throwing machine</a> </p> <a href="https://publications.waset.org/abstracts/36393/highly-accurate-tennis-ball-throwing-machine-with-intelligent-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36393.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">397</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">51</span> Defining of the Shape of the Spine Using Moiré Method in Case of Patients with Scheuermann Disease</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Petra%20Balla">Petra Balla</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20Manhertz"> Gabor Manhertz</a>, <a href="https://publications.waset.org/abstracts/search?q=Akos%20Antal"> Akos Antal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays spinal deformities are very frequent problems among teenagers. Scheuermann disease is a one dimensional deformity of the spine, but it has prevalence over 11% of the children. A traditional technology, the moiré method was used by us for screening and diagnosing this type of spinal deformity. A LabVIEW program has been developed to evaluate the moiré pictures of patients with Scheuermann disease. Two different solutions were tested in this computer program, the extreme and the inflexion point calculation methods. Effects using these methods were compared and according to the results both solutions seemed to be appropriate. Statistical results showed better efficiency in case of the extreme search method where the average difference was only 6,09⁰. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spinal%20deformity" title="spinal deformity">spinal deformity</a>, <a href="https://publications.waset.org/abstracts/search?q=picture%20evaluation" title=" picture evaluation"> picture evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=Moir%C3%A9%20method" title=" Moiré method"> Moiré method</a>, <a href="https://publications.waset.org/abstracts/search?q=Scheuermann%20disease" title=" Scheuermann disease"> Scheuermann disease</a>, <a href="https://publications.waset.org/abstracts/search?q=curve%20detection" title=" curve detection"> curve detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Moir%C3%A9%20topography" title=" Moiré topography "> Moiré topography </a> </p> <a href="https://publications.waset.org/abstracts/11648/defining-of-the-shape-of-the-spine-using-moire-method-in-case-of-patients-with-scheuermann-disease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11648.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">50</span> Warning about the Risk of Blood Flow Stagnation after Transcatheter Aortic Valve Implantation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aymen%20Laadhari">Aymen Laadhari</a>, <a href="https://publications.waset.org/abstracts/search?q=G%C3%A1bor%20Sz%C3%A9kely"> Gábor Székely</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, the hemodynamics in the sinuses of Valsalva after Transcatheter Aortic Valve Implantation is numerically examined. We focus on the physical results in the two-dimensional case. We use a finite element methodology based on a Lagrange multiplier technique that enables to couple the dynamics of blood flow and the leaflets&rsquo; movement. A massively parallel implementation of a monolithic and fully implicit solver allows more accuracy and significant computational savings. The elastic properties of the aortic valve are disregarded, and the numerical computations are performed under physiologically correct pressure loads. Computational results depict that blood flow may be subject to stagnation in the lower domain of the sinuses of Valsalva after Transcatheter Aortic Valve Implantation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hemodynamics" title="hemodynamics">hemodynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=simulations" title=" simulations"> simulations</a>, <a href="https://publications.waset.org/abstracts/search?q=stagnation" title=" stagnation"> stagnation</a>, <a href="https://publications.waset.org/abstracts/search?q=valve" title=" valve"> valve</a> </p> <a href="https://publications.waset.org/abstracts/63534/warning-about-the-risk-of-blood-flow-stagnation-after-transcatheter-aortic-valve-implantation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/63534.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">291</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">49</span> Self-Calibration of Fish-Eye Camera for Advanced Driver Assistance Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Atef%20Alaaeddine%20Sarraj">Atef Alaaeddine Sarraj</a>, <a href="https://publications.waset.org/abstracts/search?q=Brendan%20Jackman"> Brendan Jackman</a>, <a href="https://publications.waset.org/abstracts/search?q=Frank%20Walsh"> Frank Walsh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tomorrow’s car will be more automated and increasingly connected. Innovative and intuitive interfaces are essential to accompany this functional enrichment. For that, today the automotive companies are competing to offer an advanced driver assistance system (ADAS) which will be able to provide enhanced navigation, collision avoidance, intersection support and lane keeping. These vision-based functions require an accurately calibrated camera. To achieve such differentiation in ADAS requires sophisticated sensors and efficient algorithms. This paper explores the different calibration methods applicable to vehicle-mounted fish-eye cameras with arbitrary fields of view and defines the first steps towards a self-calibration method that adequately addresses ADAS requirements. In particular, we present a self-calibration method after comparing different camera calibration algorithms in the context of ADAS requirements. Our method gathers data from unknown scenes while the car is moving, estimates the camera intrinsic and extrinsic parameters and corrects the wide-angle distortion. Our solution enables continuous and real-time detection of objects, pedestrians, road markings and other cars. In contrast, other camera calibration algorithms for ADAS need pre-calibration, while the presented method calibrates the camera without prior knowledge of the scene and in real-time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advanced%20driver%20assistance%20system%20%28ADAS%29" title="advanced driver assistance system (ADAS)">advanced driver assistance system (ADAS)</a>, <a href="https://publications.waset.org/abstracts/search?q=fish-eye" title=" fish-eye"> fish-eye</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time" title=" real-time"> real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=self-calibration" title=" self-calibration"> self-calibration</a> </p> <a href="https://publications.waset.org/abstracts/70853/self-calibration-of-fish-eye-camera-for-advanced-driver-assistance-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70853.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">252</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">48</span> The Climate Impact Due to Clouds and Selected Greenhouse Gases by Short Wave Upwelling Radiative Flux within Spectral Range of Space-Orbiting Argus1000 Micro-Spectrometer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rehan%20Siddiqui">Rehan Siddiqui</a>, <a href="https://publications.waset.org/abstracts/search?q=Brendan%20Quine"> Brendan Quine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Radiance Enhancement (RE) and integrated absorption technique is applied to develop a synthetic model to determine the enhancement in radiance due to cloud scene and Shortwave upwelling Radiances (SHupR) by O2, H2O, CO2 and CH4. This new model is used to estimate the magnitude variation for RE and SHupR over spectral range of 900 nm to 1700 nm by varying surface altitude, mixing ratios and surface reflectivity. In this work, we employ satellite real observation of space orbiting Argus 1000 especially for O2, H2O, CO2 and CH4 together with synthetic model by using line by line GENSPECT radiative transfer model. All the radiative transfer simulations have been performed by varying over a different range of percentages of water vapor contents and carbon dioxide with the fixed concentration oxygen and methane. We calculate and compare both the synthetic and real measured observed data set of different week per pass of Argus flight. Results are found to be comparable for both approaches, after allowing for the differences with the real and synthetic technique. The methodology based on RE and SHupR of the space spectral data can be promising for the instant and reliable classification of the cloud scenes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radiance%20enhancement" title="radiance enhancement">radiance enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=radiative%20transfer" title=" radiative transfer"> radiative transfer</a>, <a href="https://publications.waset.org/abstracts/search?q=shortwave%20upwelling%20radiative%20flux" title=" shortwave upwelling radiative flux"> shortwave upwelling radiative flux</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud%20reflectivity" title=" cloud reflectivity"> cloud reflectivity</a>, <a href="https://publications.waset.org/abstracts/search?q=greenhouse%20gases" title=" greenhouse gases"> greenhouse gases</a> </p> <a href="https://publications.waset.org/abstracts/38435/the-climate-impact-due-to-clouds-and-selected-greenhouse-gases-by-short-wave-upwelling-radiative-flux-within-spectral-range-of-space-orbiting-argus1000-micro-spectrometer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38435.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">336</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">47</span> Comparison of Quality Indices for Sediment Assessment in Ireland</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tayyaba%20%20Bibi">Tayyaba Bibi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jenny%20%20Ronan"> Jenny Ronan</a>, <a href="https://publications.waset.org/abstracts/search?q=Robert%20%20Hernan"> Robert Hernan</a>, <a href="https://publications.waset.org/abstracts/search?q=Kathleen%20%20O%E2%80%99Rourke"> Kathleen O’Rourke</a>, <a href="https://publications.waset.org/abstracts/search?q=Brendan%20%20McHugh"> Brendan McHugh</a>, <a href="https://publications.waset.org/abstracts/search?q=Evin%20%20McGovern"> Evin McGovern</a>, <a href="https://publications.waset.org/abstracts/search?q=Michelle%20%20Giltrap"> Michelle Giltrap</a>, <a href="https://publications.waset.org/abstracts/search?q=Gordon%20Chambers"> Gordon Chambers</a>, <a href="https://publications.waset.org/abstracts/search?q=James%20Wilson"> James Wilson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sediment contamination is a major source of ecosystem stress and has received significant attention from the scientific community. Both the Water Framework Directive (WFD) and Marine Strategy Framework Directive (MSFD) require a robust set of tools for biological and chemical monitoring. For the MSFD in particular, causal links between contaminant and effects need to be assessed. Appropriate assessment tools are required in order to make an accurate evaluation. In this study, a range of recommended sediment bioassays and chemical measurements are assessed in a number of potentially impacted and lowly impacted locations around Ireland. Previously, assessment indices have been developed on individual compartments, i.e. contaminant levels or biomarker/bioassay responses. A number of assessment indices are applied to chemical and ecotoxicological data from the Seachange project (Project code) and compared including the metal pollution index (MPI), pollution load index (PLI) and Chapman index for chemistry as well as integrated biomarker response (IBR). The benefits and drawbacks of the use of indices and aggregation techniques are discussed. In addition to this, modelling of raw data is investigated to analyse links between contaminant and effects. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bioassays" title="bioassays">bioassays</a>, <a href="https://publications.waset.org/abstracts/search?q=contamination%20indices" title=" contamination indices"> contamination indices</a>, <a href="https://publications.waset.org/abstracts/search?q=ecotoxicity" title=" ecotoxicity"> ecotoxicity</a>, <a href="https://publications.waset.org/abstracts/search?q=marine%20environment" title=" marine environment"> marine environment</a>, <a href="https://publications.waset.org/abstracts/search?q=sediments" title=" sediments"> sediments</a> </p> <a href="https://publications.waset.org/abstracts/84860/comparison-of-quality-indices-for-sediment-assessment-in-ireland" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84860.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">228</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">46</span> Keyframe Extraction Using Face Quality Assessment and Convolution Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rahma%20Abed">Rahma Abed</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahbi%20Bahroun"> Sahbi Bahroun</a>, <a href="https://publications.waset.org/abstracts/search?q=Ezzeddine%20Zagrouba"> Ezzeddine Zagrouba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the huge amount of data in videos, extracting the relevant frames became a necessity and an essential step prior to performing face recognition. In this context, we propose a method for extracting keyframes from videos based on face quality and deep learning for a face recognition task. This method has two steps. We start by generating face quality scores for each face image based on the use of three face feature extractors, including Gabor, LBP, and HOG. The second step consists in training a Deep Convolutional Neural Network in a supervised manner in order to select the frames that have the best face quality. The obtained results show the effectiveness of the proposed method compared to the methods of the state of the art. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=keyframe%20extraction" title="keyframe extraction">keyframe extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20quality%20assessment" title=" face quality assessment"> face quality assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20in%20video%20recognition" title=" face in video recognition"> face in video recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a> </p> <a href="https://publications.waset.org/abstracts/111347/keyframe-extraction-using-face-quality-assessment-and-convolution-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/111347.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">45</span> The Use of Microorganisms in the Bioleaching of Soils Polluted with Heavy Metals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=I.%20M.%20Sur">I. M. Sur</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20M.%20Chirila-Babau"> A. M. Chirila-Babau</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Gabor"> T. Gabor</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Micle"> V. Micle</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper shows researches in order to extract Cr, Cu and Ni from the polluted soils. Research is based on preliminary studies regarding the usage of <em>Thiobacillus ferrooxidans</em> bacterium (9K medium) for bioleaching of soil polluted with heavy metal (Cu, Cr and Ni). The microorganisms (<em>Thiobacillus ferooxidans</em>) selected directly from polluted soil samples were used in this experimental work. Soil samples used in the experimental research were taken from an area polluted with heavy metals from Romania. The soil samples are subjected to the cleaning process using the 9K medium solution (20 mL and 40 mL, respectively), stirred 200 rpm for 20 hours at a controlled temperature (30 ˚C). During the experiment (0, 2, 4, 8 and 20 h), liquid samples have been extracted and analyzed using the Atomic Absorption Spectrophotometer AA-6800 (AAS) in order to determine the Cr, Cu and Ni concentration. Experiments led to the conclusion that these soils can be depolluted by bioleaching, being a biological treatment method involving the use of microorganisms to favor the extraction of Cr, Cu and Ni from polluted soils. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bioleaching" title="bioleaching">bioleaching</a>, <a href="https://publications.waset.org/abstracts/search?q=extraction" title=" extraction"> extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=microorganisms" title=" microorganisms"> microorganisms</a>, <a href="https://publications.waset.org/abstracts/search?q=soil" title=" soil"> soil</a>, <a href="https://publications.waset.org/abstracts/search?q=polluted" title=" polluted"> polluted</a>, <a href="https://publications.waset.org/abstracts/search?q=Thiobacillus%20ferooxidans" title=" Thiobacillus ferooxidans"> Thiobacillus ferooxidans</a> </p> <a href="https://publications.waset.org/abstracts/91874/the-use-of-microorganisms-in-the-bioleaching-of-soils-polluted-with-heavy-metals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91874.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">44</span> A 3Y/3Y Pole-Changing Winding of High-Power Asynchronous Motors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=G%C3%A1bor%20Kov%C3%A1cs">Gábor Kovács</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Requirement for pole-changing motors emerged at the very early times of asynchronous motor design. Different solutions have been elaborated and some of them are generally used. An alternative is the so called 3 Y/3 Y pole-changing winding. This paper deals with high power application of this solution. A complete and comprehensive study is introduced, including features and design guidelines. The method presented in this paper is especially suitable for pole numbers being close to each other. The study also reveals that the method is more advantageous then the existing solutions for high power motors with 1:3 pole ratio. Using this motor, a new and complete drive supply system has been proposed as most appropriate arrangement of high power main naval propulsion drive. Further, the method makes possible to extend the pole ratio to 1:6, 1:9, 1:12, etc. At the end, the proposal is further extended to the here so far missing 1:4, 1:5, 1:7 etc. pole ratios. A complete proposal for the theoretically infinite range has been given in this way. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=induction%20motor" title="induction motor">induction motor</a>, <a href="https://publications.waset.org/abstracts/search?q=pole%20changing%203Y%2F3Y" title=" pole changing 3Y/3Y"> pole changing 3Y/3Y</a>, <a href="https://publications.waset.org/abstracts/search?q=pole%20phase%20modulation" title=" pole phase modulation"> pole phase modulation</a>, <a href="https://publications.waset.org/abstracts/search?q=pole%20changing%201%3A3" title=" pole changing 1:3"> pole changing 1:3</a>, <a href="https://publications.waset.org/abstracts/search?q=1%3A6" title=" 1:6"> 1:6</a> </p> <a href="https://publications.waset.org/abstracts/80062/a-3y3y-pole-changing-winding-of-high-power-asynchronous-motors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/80062.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ranz%20Brendan%20D.%20Gabor&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ranz%20Brendan%20D.%20Gabor&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ranz%20Brendan%20D.%20Gabor&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10