<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: blind deconvolution</title> <meta name="description" content="Search results for: blind deconvolution"> <meta name="keywords" content="blind deconvolution"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="blind deconvolution" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="blind deconvolution"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 325</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: blind deconvolution</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">325</span> Digital Recording System Identification Based on Audio File</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michel%20Kulhandjian">Michel Kulhandjian</a>, <a href="https://publications.waset.org/abstracts/search?q=Dimitris%20A.%20Pados"> Dimitris A. Pados</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this work is to develop a theoretical framework for reliable digital recording system identification from digital audio files alone, for forensic purposes. A digital recording system consists of a microphone and a digital sound processing card. We view the cascade as a system of unknown transfer function. 
We expect microphone-sound card combinations of the same manufacturer and model to have very similar, near-identical transfer functions, barring any unique manufacturing defects. Input voice (or other) signals are modeled as non-stationary processes. The technical problem under consideration becomes blind deconvolution with non-stationary inputs as it manifests itself in the specific application of digital audio recording equipment classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification" title="blind system identification">blind system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20fingerprinting" title=" audio fingerprinting"> audio fingerprinting</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution" title=" blind deconvolution"> blind deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20dereverberation" title=" blind dereverberation"> blind dereverberation</a> </p> <a href="https://publications.waset.org/abstracts/75122/digital-recording-system-identification-based-on-audio-file" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75122.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">304</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">324</span> Arbitrarily Shaped Blur Kernel Estimation for Single Image Blind Deblurring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aftab%20Khan">Aftab Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashfaq%20Khan"> Ashfaq Khan</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> The research paper focuses on an interesting challenge faced in Blind Image Deblurring (BID): the estimation of arbitrarily shaped, non-parametric Point Spread Functions (PSFs) of motion blur caused by camera shake. These PSFs exhibit much more complex shapes than their parametric counterparts, and deblurring in this case requires intricate ways to estimate the blur and effectively remove it. This research work introduces a novel blind deblurring scheme devised for images corrupted by arbitrarily shaped PSFs. It is based on a Genetic Algorithm (GA) and utilises the Blind/Referenceless Image Spatial QUality Evaluator (BRISQUE) measure as the fitness function for arbitrarily shaped PSF estimation. The proposed BID scheme has been compared with other single-image motion deblurring schemes as benchmarks. Validation has been carried out on various blurred images, and results for both benchmark and real images are presented, with no-reference image quality measures used to quantify the deblurring results. For benchmark images, the proposed BID scheme using BRISQUE converges in the close vicinity of the original blurring functions. 
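As a rough illustration of the GA loop this abstract describes, the sketch below evolves a small non-parametric kernel by Wiener-deconvolving the blurred image with each candidate PSF and scoring the result. A simple gradient-energy score stands in for BRISQUE (which needs a trained model), so every function name and parameter here is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, k=0.01):
    """Frequency-domain Wiener deconvolution with a constant noise term."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

def sharpness(img):
    """Stand-in no-reference fitness (gradient energy); the paper uses BRISQUE."""
    gy, gx = np.gradient(img)
    return float(np.mean(gx ** 2 + gy ** 2))

def ga_estimate_psf(blurred, ksize=7, pop=20, gens=30, rng=None):
    """Evolve a non-parametric PSF whose Wiener deconvolution maximises fitness."""
    rng = np.random.default_rng(0) if rng is None else rng
    population = rng.random((pop, ksize, ksize))
    population /= population.sum(axis=(1, 2), keepdims=True)  # PSFs sum to 1
    for _ in range(gens):
        scores = [sharpness(wiener_deconvolve(blurred, p)) for p in population]
        elite = population[np.argsort(scores)[-(pop // 2):]]   # selection
        # mutation: small Gaussian perturbation, clipped and renormalised
        children = np.clip(elite + rng.normal(0.0, 0.01, elite.shape), 0.0, None)
        children /= children.sum(axis=(1, 2), keepdims=True)
        population = np.concatenate([elite, children])
    scores = [sharpness(wiener_deconvolve(blurred, p)) for p in population]
    return population[int(np.argmax(scores))]
```

In the paper the fitness would instead be the BRISQUE score of the deblurred image; the selection/mutation structure is the generic GA part.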
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution" title="blind deconvolution">blind deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20image%20deblurring" title=" blind image deblurring"> blind image deblurring</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20restoration" title=" image restoration"> image restoration</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20measures" title=" image quality measures"> image quality measures</a> </p> <a href="https://publications.waset.org/abstracts/37142/arbitrarily-shaped-blur-kernel-estimation-for-single-image-blind-deblurring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">443</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">323</span> New Iterative Algorithm for Improving Depth Resolution in Ionic Analysis: Effect of Iterations Number</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20Dahraoui">N. Dahraoui</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Boulakroune"> M. Boulakroune</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Benatia"> D. Benatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the improvement by deconvolution of the depth resolution in Secondary Ion Mass Spectrometry (SIMS) analysis is considered. 
Indeed, we have developed a new Tikhonov-Miller deconvolution algorithm in which an a priori model of the solution is included. This model is a denoised and pre-deconvolved signal obtained, firstly, by applying a wavelet shrinkage algorithm and, secondly, by feeding the resulting denoised signal into an iterative deconvolution algorithm. In particular, we have focused on the effect of the number of iterations on the evolution of the deconvolved signals. The SIMS profiles are multilayers of boron in a silicon matrix. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DRF" title="DRF">DRF</a>, <a href="https://publications.waset.org/abstracts/search?q=in-depth%20resolution" title=" in-depth resolution"> in-depth resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=multiresolution%20deconvolution" title=" multiresolution deconvolution"> multiresolution deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=SIMS" title=" SIMS"> SIMS</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20shrinkage" title=" wavelet shrinkage"> wavelet shrinkage</a> </p> <a href="https://publications.waset.org/abstracts/22225/new-iterative-algorithm-for-improving-depth-resolution-in-ionic-analysis-effect-of-iterations-number" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22225.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">418</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">322</span> A Goms Model for Blind Users Website Navigation </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Suraina%20Sulong">Suraina 
Sulong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Keyboard support is one of the main accessibility requirements of web pages and web applications for blind users. But it is not sufficient that the blind user can perform all actions on the page using the keyboard; in addition, designers of web sites or web applications have to make sure that keyboard users can use their pages with acceptable performance. We present GOMS models for navigation in web pages, with specific tasks given to the blind user to accomplish. These models can be used to construct the user model for an accessible website. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GOMS%20analysis" title="GOMS analysis">GOMS analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=usability%20factor" title=" usability factor"> usability factor</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20user" title=" blind user"> blind user</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20computer%20interaction" title=" human computer interaction"> human computer interaction</a> </p> <a href="https://publications.waset.org/abstracts/128408/a-goms-model-for-blind-users-website-navigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128408.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">321</span> Detection of Image Blur and Its Restoration for Image Enhancement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. 
Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image restoration in the process of communication is one of the emerging fields in image processing. Motion analysis is the simplest case of detecting motion in an image. Applications of motion analysis are widespread in areas such as surveillance, remote sensing, the film industry and the navigation of autonomous vehicles. The scene may contain multiple moving objects; motion analysis techniques can compensate for the blur caused by object movement by filling in occluded regions and reconstructing transparent objects, and they also remove the motion blurring itself. This paper presents the design and comparison of various motion detection and enhancement filters. The median filter, linear image deconvolution, the inverse filter, the pseudo-inverse filter, the Wiener filter, the Lucy-Richardson filter and blind deconvolution filters are used to remove the blur. In this work, we have considered different types and different amounts of blur for the analysis. Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are used to evaluate the performance of the filters. The designed system has been implemented in Matlab software and tested on synthetic and real-time images. 
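To make the filter comparison concrete, here is a minimal NumPy sketch (an illustrative re-implementation, not the authors' Matlab code) of two of the listed restoration filters, the inverse filter and the Wiener filter, together with the PSNR metric used for evaluation:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def inverse_filter(blurred, kernel, eps=1e-3):
    """Naive inverse filter; eps guards against near-zero frequency components."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    H = np.where(np.abs(H) < eps, eps, H)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) / H))

def wiener_filter(blurred, kernel, nsr=0.01):
    """Wiener deconvolution with a constant noise-to-signal ratio."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))
```

With noise present, the inverse filter amplifies high frequencies badly, while the `nsr` term in the Wiener filter damps them; this is the behavioural difference such comparisons typically quantify with MSE/PSNR.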
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title="image enhancement">image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20analysis" title=" motion analysis"> motion analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20detection" title=" motion detection"> motion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20estimation" title=" motion estimation"> motion estimation</a> </p> <a href="https://publications.waset.org/abstracts/59485/detection-of-image-blur-and-its-restoration-for-image-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">287</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">320</span> Interactive Fun Activities for Blind and Sighted Teenagers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haif%20Alharthy">Haif Alharthy</a>, <a href="https://publications.waset.org/abstracts/search?q=Samar%20Altarteer"> Samar Altarteer</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Blind and sighted teenagers might find it challenging to communicate and have fun interacting with each other. Previous studies emphasize the importance of interactive communication between blind and sighted people in developing the interpersonal and social skills of blind people. Playing games is one of the effective ways to engage the blind with the sighted and help enhance their social skills. 
However, it is difficult to find a fun game designed to encourage interaction between blind and sighted teenagers that the blind can play independently, without help, and whose design the sighted find attractive and satisfying. The aim of this paper is to examine how challenging it is to create fun interaction between blind and sighted people and to offer an interactive tabletop game solution in which both can participate independently and enjoyably. The paper discusses the importance and impact of fun, interactive communication between blind and sighted people and how to get them involved with each other through games, and it investigates several approaches to designing a universal game. A survey was conducted among family members of blind teenagers to discover what difficulties they face while playing and communicating with their blind family member and to identify the blind teenagers' needs and interests in games. The study reveals that although families like to play tabletop games with their blind member, they find it difficult to find universal games that are interesting and adequate for both. Qualitative interviews conducted with blind teenagers also showed a shortage of tabletop games that do not require help from another family member to play. The results suggested that an effective approach is to develop an interactive tabletop game embedded with audio and tactile techniques. The findings of the pilot study highlighted the necessary information, such as tools, visuals and game concepts, that should be considered in developing an interactive card game for blind and sighted teenagers. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Blind" title="Blind">Blind</a>, <a href="https://publications.waset.org/abstracts/search?q=card%20game" title=" card game"> card game</a>, <a href="https://publications.waset.org/abstracts/search?q=communication" title=" communication"> communication</a>, <a href="https://publications.waset.org/abstracts/search?q=interaction" title=" interaction"> interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=play" title=" play"> play</a>, <a href="https://publications.waset.org/abstracts/search?q=tabletop%20game" title=" tabletop game"> tabletop game</a>, <a href="https://publications.waset.org/abstracts/search?q=teenager" title=" teenager"> teenager</a> </p> <a href="https://publications.waset.org/abstracts/90816/interactive-fun-activities-for-blind-and-sighted-teenagers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/90816.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">209</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">319</span> Comparing Friction Force Between Track and Spline Using graphite, Mos2, PTFE, and Silicon Dry Lubricant</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20De%20Maaijer">M. 
De Maaijer</a>, <a href="https://publications.waset.org/abstracts/search?q=Wenxuan%20Shi"> Wenxuan Shi</a>, <a href="https://publications.waset.org/abstracts/search?q="></a>, <a href="https://publications.waset.org/abstracts/search?q=Dolores%20Pose">Dolores Pose</a>, <a href="https://publications.waset.org/abstracts/search?q=Ditmar"> Ditmar</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20Barati"> F. Barati</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Friction has several detrimental effects on blind performance; therefore Ziptrak, a leading company in the blind manufacturing sector, started investigating how to overcome this problem in its next generation of blinds. The problem is most severe in extreme conditions. Although Ziptrak advises against operating the blind in such conditions, improving the blind and its associated parts remained a priority for the company. The purpose of this article is to measure the effect of the lubrication process on reducing the friction force between spline and track, especially in windy conditions. Four different lubricants were applied to measure their efficiency in reducing the friction force. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=libricant" title="libricant">libricant</a>, <a href="https://publications.waset.org/abstracts/search?q=ziptrak" title=" ziptrak"> ziptrak</a>, <a href="https://publications.waset.org/abstracts/search?q=blind" title=" blind"> blind</a>, <a href="https://publications.waset.org/abstracts/search?q=spline" title=" spline"> spline</a> </p> <a href="https://publications.waset.org/abstracts/163058/comparing-friction-force-between-track-and-spline-using-graphite-mos2-ptfe-and-silicon-dry-lubricant" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163058.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">84</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">318</span> Teachers’ Perceptions on Communicating with Students Who Are Deaf-Blind in Regular Classes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Phillimon%20Mahanya">Phillimon Mahanya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Learners with deaf-blindness use touch to communicate. However, teachers are not well versed with tactile communication technicalities. Lack of technical know-how is compounded with a lack of standardisation of the tactile signs the world over. Thus, this study arose from the need to have efficient and effective tactile sign communication for learners who are deaf-blind. A qualitative approach that adopted a case study design was used. A sample of 22 participants comprising school administrators and teachers was purposively drawn from the institutions that enrolled learners who are deaf-blind. 
Data generated using semi-structured interviews, non-participant observations and document analysis were thematically analysed. It emerged that administrators and teachers used mammoth and solo touches that are not standardised to communicate with learners who are deaf-blind. It was recommended that there should be a standardised tactile sign manual in Zimbabwe to promote the inclusion of learners who are deaf-blind. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=communication" title="communication">communication</a>, <a href="https://publications.waset.org/abstracts/search?q=deaf-blind" title=" deaf-blind"> deaf-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=signing" title=" signing"> signing</a>, <a href="https://publications.waset.org/abstracts/search?q=tactile" title=" tactile"> tactile</a> </p> <a href="https://publications.waset.org/abstracts/142698/teachers-perceptions-on-communicating-with-students-who-are-deaf-blind-in-regular-classes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142698.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">238</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">317</span> Aeromagnetic Data Interpretation and Source Body Evaluation Using Standard Euler Deconvolution Technique in Obudu Area, Southeastern Nigeria</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chidiebere%20C.%20Agoha">Chidiebere C. Agoha</a>, <a href="https://publications.waset.org/abstracts/search?q=Chukwuebuka%20N.%20Onwubuariri"> Chukwuebuka N. 
Onwubuariri</a>, <a href="https://publications.waset.org/abstracts/search?q=Collins%20U.amasike"> Collins U.amasike</a>, <a href="https://publications.waset.org/abstracts/search?q=Tochukwu%20I.%20Mgbeojedo"> Tochukwu I. Mgbeojedo</a>, <a href="https://publications.waset.org/abstracts/search?q=Joy%20O.%20Njoku"> Joy O. Njoku</a>, <a href="https://publications.waset.org/abstracts/search?q=Lawson%20J.%20Osaki"> Lawson J. Osaki</a>, <a href="https://publications.waset.org/abstracts/search?q=Ifeyinwa%20J.%20Ofoh"> Ifeyinwa J. Ofoh</a>, <a href="https://publications.waset.org/abstracts/search?q=Francis%20B.%20Akiang"> Francis B. Akiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Dominic%20N.%20Anuforo"> Dominic N. Anuforo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to interpret the airborne magnetic data and evaluate the approximate location, depth, and geometry of the magnetic sources within the Obudu area using the standard Euler deconvolution method, very high-resolution aeromagnetic data over the area were acquired, processed digitally and analyzed using Oasis Montaj 8.5 software. Data analysis and enhancement techniques, including reduction to the equator, horizontal derivative, first and second vertical derivatives, upward continuation and regional-residual separation, were carried out for the purpose of detailed data interpretation. Standard Euler deconvolution for structural indices of 0, 1, 2, and 3 was also carried out, and the respective maps were obtained using the Euler deconvolution algorithm. Results show that the total magnetic intensity ranges from -122.9nT to 147.0nT, the regional intensity varies between -106.9nT and 137.0nT, and the residual intensity ranges between -51.5nT and 44.9nT, clearly indicating the masking effect of deep-seated structures over surface and shallow subsurface magnetic materials. 
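Standard Euler deconvolution solves the homogeneity equation (x-x0)Tx + (y-y0)Ty + (z-z0)Tz = N(B-T) by least squares in sliding data windows, one solution per window and structural index N. A minimal single-window sketch on synthetic data (an illustration of the underlying equation only, not the Oasis Montaj workflow used in the paper) might look like:

```python
import numpy as np

def euler_window(x, y, T, Tx, Ty, Tz, N):
    """Least-squares solution of Euler's homogeneity equation
    (x-x0)Tx + (y-y0)Ty + (z-z0)Tz = N(B - T) for one data window,
    with observations on the plane z = 0. Rearranged, each observation gives
    x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + N*T, which is linear in the
    unknown source position (x0, y0, z0) and background level B."""
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(Tx)])
    b = x * Tx + y * Ty + N * T
    (x0, y0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0, z0, B
```

In practice the field gradients Tx, Ty, Tz are computed from the gridded anomaly (horizontal and vertical derivatives, as in the enhancement steps above), the window is slid across the grid, and poorly conditioned solutions are rejected.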
Results also indicated that the positive residual anomalies have an NE-SW orientation, which coincides with the trend of major geologic structures in the area. Euler deconvolution for all the considered structural indices has depth to magnetic sources ranging from the surface to more than 2000m. Interpretation of the various structural indices revealed the locations and depths of the source bodies and the existence of geologic models, including sills, dykes, pipes, and spherical structures. This area is characterized by intrusive and very shallow basement materials and represents an excellent prospect for solid mineral exploration and development. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Euler%20deconvolution" title="Euler deconvolution">Euler deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=horizontal%20derivative" title=" horizontal derivative"> horizontal derivative</a>, <a href="https://publications.waset.org/abstracts/search?q=Obudu" title=" Obudu"> Obudu</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20indices" title=" structural indices"> structural indices</a> </p> <a href="https://publications.waset.org/abstracts/171696/aeromagnetic-data-interpretation-and-source-body-evaluation-using-standard-euler-deconvolution-technique-in-obudu-area-southeastern-nigeria" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171696.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">316</span> Rehabilitation of the Blind Using Sono-Visualization Tool</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Ashwani%20Kumar">Ashwani Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In human beings, eyes play a vital role. Very little research has been done on the rehabilitation of blind people. This paper discusses work that helps blind people recognize the basic shapes of objects, such as circles, squares, triangles, horizontal lines, vertical lines and diagonal lines, and waveforms such as sinusoidal, square and triangular. This is achieved largely by using a digital camera, which captures the visual information present in front of the blind person, and a software program, which performs the image processing operations; finally, the processed image is converted into sound. After the sound generation process, the generated sound is fed to the blind person through headphones so that an imaginary image of the object can be visualized. To visualize this imaginary image, the blind person needs to be trained, and various training methods have been applied for recognizing the objects. 
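The image-to-sound stage described above can be approximated in a few lines. The column-scan, pitch-for-row mapping below (similar in spirit to sensory-substitution systems such as the vOICe) is an illustrative assumption, since the abstract does not specify the exact encoding; brightness drives loudness and row position drives pitch:

```python
import numpy as np

def image_to_sound(image, duration=1.0, sr=8000, f_lo=200.0, f_hi=2000.0):
    """Scan a grayscale image left to right: each column becomes a time slice,
    each row a sinusoid whose pitch encodes vertical position (top row = high
    pitch) and whose amplitude encodes pixel brightness."""
    h, w = image.shape
    n = int(sr * duration)
    t = np.arange(n) / sr
    col_idx = np.minimum((t / duration * w).astype(int), w - 1)  # column at each sample
    freqs = np.logspace(np.log10(f_hi), np.log10(f_lo), h)       # one tone per row
    audio = np.zeros(n)
    for r in range(h):
        amp = image[r, col_idx]                  # brightness -> loudness
        audio += amp * np.sin(2 * np.pi * freqs[r] * t)
    audio /= max(np.abs(audio).max(), 1e-9)      # normalise to [-1, 1]
    return audio
```

The training step the abstract mentions would then consist of listeners learning to associate these pitch/time patterns with shapes such as lines and triangles.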
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel" title=" pixel"> pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=pitch" title=" pitch"> pitch</a>, <a href="https://publications.waset.org/abstracts/search?q=loudness" title=" loudness"> loudness</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20generation" title=" sound generation"> sound generation</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=brightness" title=" brightness"> brightness</a> </p> <a href="https://publications.waset.org/abstracts/14606/rehabilitation-of-the-blind-using-sono-visualization-tool" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14606.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">388</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">315</span> Buddha Images in Mudras Representing Days of a Week: Tactile Texture Design for the Blind</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chantana%20Insra">Chantana Insra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The research “Buddha Images in Mudras Representing Days of a Week: Tactile Texture Design for the Blind” aims to provide original tactile format to institutions for the blind, as supplementary textbooks, to accumulate Buddhist knowledge, so that it could be extracurricular learning. 
The research studied 33 students, both totally and partially blind, the latter able to read Braille, in elementary grades 4–6, pursuing their studies in the second semester of the 2013 academic year at the Bangkok School for the Blind. The researcher selected the samples purposively and studied data acquired from both documents and fieldwork related to the blind, tactile format production, and Buddha images in mudras representing the days of a week. Afterwards, the formats will be analyzed and designed to produce 8 pictures of Buddha images in mudras representing the days of the week. Experts will then evaluate and try out the media. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind" title="blind">blind</a>, <a href="https://publications.waset.org/abstracts/search?q=tactile%20texture" title=" tactile texture"> tactile texture</a>, <a href="https://publications.waset.org/abstracts/search?q=Thai%20Buddha%20images" title=" Thai Buddha images"> Thai Buddha images</a>, <a href="https://publications.waset.org/abstracts/search?q=Mudras" title=" Mudras"> Mudras</a>, <a href="https://publications.waset.org/abstracts/search?q=texture%20design" title=" texture design"> texture design</a> </p> <a href="https://publications.waset.org/abstracts/17352/buddha-images-in-mudras-representing-days-of-a-week-tactile-texture-design-for-the-blind" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17352.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">351</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">314</span> Identification of Author and Reviewer from Single and Double Blind Paper</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jatinderkumar%20R.%20Saini">Jatinderkumar R. Saini</a>, <a href="https://publications.waset.org/abstracts/search?q=Nikita.%20R.%20Sonthalia"> Nikita. R. Sonthalia</a>, <a href="https://publications.waset.org/abstracts/search?q=Khushbu.%20A.%20Dodiya"> Khushbu. A. Dodiya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Research leads to the development of science and technology and hence to the betterment of humankind. Journals and conferences provide a platform to receive a large number of research papers for publication and presentation before the expert and scientific community. In order to assure the quality of such papers, they are also sent to reviewers for their comments. To maintain good ethical standards, the research papers are sent to reviewers in such a way that they do not know each other’s identity; this technique is called the double-blind review process. It is called the single-blind review process if the identity of any one party (generally the authors) is disclosed to the other. This paper presents techniques by which the identity of the author as well as the reviewer can be made out even through a double-blind review process. It is proposed that the characteristics and techniques presented here will help journals and conferences detect intentional or unintentional disclosure of identity-revealing information by either party to the other. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=author" title="author">author</a>, <a href="https://publications.waset.org/abstracts/search?q=conference" title=" conference"> conference</a>, <a href="https://publications.waset.org/abstracts/search?q=double%20blind%20paper" title=" double blind paper"> double blind paper</a>, <a href="https://publications.waset.org/abstracts/search?q=journal" title=" journal"> journal</a>, <a href="https://publications.waset.org/abstracts/search?q=reviewer" title=" reviewer"> reviewer</a>, <a href="https://publications.waset.org/abstracts/search?q=single%20blind%20paper" title=" single blind paper"> single blind paper</a> </p> <a href="https://publications.waset.org/abstracts/3915/identification-of-author-and-reviewer-from-single-and-double-blind-paper" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">350</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">313</span> Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yao%20Cheng">Yao Cheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Weihua%20Zhang"> Weihua Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Although the measured vibration signal contains rich information on machine health conditions, the white noise interference and the discrete harmonics coming from the blade, shaft and mesh make the fault diagnosis of rolling element 
bearings difficult. In order to overcome the interference of useless signals, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. First, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN provides an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select the sensitive IMFs that contain bearing fault information, and the composite signal of the sensitive IMFs is used for further fault identification. Second, in order to identify the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of the bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interference are used to validate the effectiveness of the proposed method. Finally, the superiority of the proposed method is further demonstrated on high-speed train bearing fault datasets measured from a test rig. The analysis results indicate that the proposed method has strong practicability. 
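The sensitivity-index step can be illustrated with a small sketch (hypothetical code, not the authors' implementation): given a set of IMFs, keep those whose Pearson correlation with the raw signal exceeds a threshold and sum them into the composite signal. Here two synthetic tones stand in for IMFs, and the 0.3 threshold is an assumed value.

```python
import numpy as np

def select_sensitive_imfs(imfs, raw, threshold=0.3):
    """Keep IMFs whose Pearson correlation coefficient with the raw
    signal exceeds the threshold; return their sum (composite signal)."""
    sensitive = [imf for imf in imfs
                 if abs(np.corrcoef(imf, raw)[0, 1]) >= threshold]
    return np.sum(sensitive, axis=0) if sensitive else np.zeros_like(raw)

# toy demo: the raw signal is dominated by a 50 Hz component, so only
# that "IMF" passes the sensitivity test; the weak 300 Hz tone is dropped
t = np.linspace(0, 1, 2000, endpoint=False)
strong = np.sin(2 * np.pi * 50 * t)
weak = 0.05 * np.sin(2 * np.pi * 300 * t)
composite = select_sensitive_imfs([strong, weak], strong + weak)
```

In a real pipeline the IMFs would come from a CEEMDAN decomposition of the measured vibration signal rather than from synthetic tones.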
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bearing" title="bearing">bearing</a>, <a href="https://publications.waset.org/abstracts/search?q=complete%20ensemble%20empirical%20mode%20decomposition%20with%20adaptive%20noise" title=" complete ensemble empirical mode decomposition with adaptive noise"> complete ensemble empirical mode decomposition with adaptive noise</a>, <a href="https://publications.waset.org/abstracts/search?q=fault%20diagnosis" title=" fault diagnosis"> fault diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=multipoint%20optimal%20minimum%20entropy%20deconvolution" title=" multipoint optimal minimum entropy deconvolution"> multipoint optimal minimum entropy deconvolution</a> </p> <a href="https://publications.waset.org/abstracts/80335/application-of-complete-ensemble-empirical-mode-decomposition-with-adaptive-noise-and-multipoint-optimal-minimum-entropy-deconvolution-in-railway-bearings-fault-diagnosis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/80335.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">374</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">312</span> The Convolution Recurrent Network of Using Residual LSTM to Process the Output of the Downsampling for Monaural Speech Enhancement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shibo%20Wei">Shibo Wei</a>, <a href="https://publications.waset.org/abstracts/search?q=Ting%20Jiang"> Ting Jiang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Convolutional-recurrent neural networks (CRN) have achieved much success recently in the speech 
enhancement field. The common processing method is to use convolution layers to compress the feature space through multiple downsampling steps and then model the compressed features with an LSTM layer. Finally, the enhanced speech is obtained by deconvolution operations that integrate the global information of the speech sequence. However, the feature-space compression process may cause a loss of information, so we propose to model the downsampling result of each step with a residual LSTM layer, join it with the output of the corresponding deconvolution layer, and feed them to the next deconvolution layer; in this way, the global information of the speech sequence is integrated better. The experimental results show that the network model we introduce (RES-CRN) can achieve better performance than the original CRN with plain LSTM layers (without residual connections) in terms of scale-invariant signal-to-distortion ratio (SI-SNR), speech quality (PESQ), and intelligibility (STOI). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional-recurrent%20neural%20networks" title="convolutional-recurrent neural networks">convolutional-recurrent neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20enhancement" title=" speech enhancement"> speech enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=residual%20LSTM" title=" residual LSTM"> residual LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=SI-SNR" title=" SI-SNR"> SI-SNR</a> </p> <a href="https://publications.waset.org/abstracts/141010/the-convolution-recurrent-network-of-using-residual-lstm-to-process-the-output-of-the-downsampling-for-monaural-speech-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141010.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span 
class="badge badge-light">200</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">311</span> Evolution of Multimodulus Algorithm Blind Equalization Based on Recursive Least Square Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sardar%20Ameer%20Akram%20Khan">Sardar Ameer Akram Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Shahzad%20Amin%20Sheikh"> Shahzad Amin Sheikh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Blind equalization is an important technique within the equalization family. Multimodulus blind equalization algorithms remove the undesirable effects of ISI and address phase issues, saving the cost of a rotator at the receiver end. In this paper, a new algorithm named RLSMMA, combining the recursive least squares and multimodulus algorithms, is proposed; under a few assumptions, fast convergence and a minimum mean square error (MSE) are achieved. The merit of this technique is shown in simulations presenting MSE plots and the resulting filter outputs. 
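The multimodulus cost penalizes the real and imaginary rails separately, which is what removes the need for a phase rotator. A minimal stochastic-gradient sketch of plain MMA (not the RLS-based variant proposed here, whose update the abstract does not detail) might look like:

```python
import numpy as np

def mma_equalize(x, n_taps=11, mu=1e-3, R=1.0):
    """Gradient-descent multimodulus algorithm (MMA) equalizer sketch.
    R is the per-rail dispersion constant E[a_R^4]/E[a_R^2] (R = 1 for
    QPSK with +/-1 rails). The error acts on the real and imaginary
    parts separately, correcting modulus and phase together."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # center-tap initialization
    out = np.empty(len(x) - n_taps + 1, dtype=complex)
    for i in range(len(out)):
        u = x[i:i + n_taps][::-1]             # regressor (newest first)
        z = w @ u                             # equalizer output
        e = z.real * (z.real**2 - R) + 1j * z.imag * (z.imag**2 - R)
        w -= mu * e * np.conj(u)              # stochastic-gradient update
        out[i] = z
    return out, w

# sanity check: already-equalized QPSK (+/-1 rails) yields zero error,
# so the filter stays at its center-tap initialization
rng = np.random.default_rng(1)
s = (2 * rng.integers(0, 2, 500) - 1) + 1j * (2 * rng.integers(0, 2, 500) - 1)
out, w = mma_equalize(s)
```

The step size `mu` and tap count are assumed values; a dispersive channel plus enough iterations would be needed to see the convergence behavior the abstract reports.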
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20equalizations" title="blind equalizations">blind equalizations</a>, <a href="https://publications.waset.org/abstracts/search?q=constant%20modulus%20algorithm" title=" constant modulus algorithm"> constant modulus algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-modulus%20algorithm" title=" multi-modulus algorithm"> multi-modulus algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=recursive%20%20least%20square%20algorithm" title=" recursive least square algorithm"> recursive least square algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=quadrature%20amplitude%20modulation%20%28QAM%29" title=" quadrature amplitude modulation (QAM)"> quadrature amplitude modulation (QAM)</a> </p> <a href="https://publications.waset.org/abstracts/24704/evolution-of-multimodulus-algorithm-blind-equalization-based-on-recursive-least-square-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24704.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">644</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">310</span> Operator Optimization Based on Hardware Architecture Alignment Requirements</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qingqing%20Gai">Qingqing Gai</a>, <a href="https://publications.waset.org/abstracts/search?q=Junxing%20Shen"> Junxing Shen</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu%20Luo"> Yu Luo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the hardware architecture characteristics, some 
operators tend to achieve better performance if the input/output tensor dimensions are aligned to a certain minimum granularity; examples include the convolution and deconvolution commonly used in deep learning. If the requirements are not met, the general strategy is to pad with zeros to satisfy them, potentially leading to under-utilization of the hardware resources. Therefore, for convolutions and deconvolutions whose input and output channels do not meet the minimum granularity alignment, we propose to transfer W-dimensional data to the C-dimension for computation (W2C), enabling the C-dimension to meet the hardware requirements; this scheme also reduces the number of computations in the W-dimension. Although the scheme substantially increases computation, the operator’s speed can improve significantly. It achieves remarkable speedups on multiple hardware accelerators, including Nvidia Tensor Cores, Qualcomm digital signal processors (DSPs), and Huawei neural processing units (NPUs). All that is needed is to modify the network structure and rearrange the operator weights offline, without retraining. At the same time, for some operators, such as Reducemax, we observe that transferring C-dimensional data to the W-dimension (C2W) and replacing the Reducemax with a Maxpool can accomplish acceleration under certain circumstances. 
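The W-to-C transfer can be illustrated as a pure layout transform (a hypothetical numpy sketch; a real deployment would also rearrange the convolution weights offline, as the abstract notes): a factor k of the W dimension is folded into the channel dimension so that C reaches the hardware's alignment granularity.

```python
import numpy as np

def w2c(x, k):
    """Fold a factor k of W into C: (N, C, H, W) -> (N, C*k, H, W//k).
    The element count is unchanged; only the layout the hardware sees
    changes, so an under-aligned C can be grown without zero padding."""
    n, c, h, w = x.shape
    assert w % k == 0, "W must be divisible by the fold factor"
    # split W into (W//k, k) groups, then move the k sub-axis next to C
    return (x.reshape(n, c, h, w // k, k)
             .transpose(0, 1, 4, 2, 3)
             .reshape(n, c * k, h, w // k))

# e.g. grow 3 channels to 12 (a hypothetical granularity) by folding W by 4
x = np.arange(2 * 3 * 4 * 8, dtype=np.float32).reshape(2, 3, 4, 8)
y = w2c(x, 4)
```

The fold factor 4 and the NCHW layout are assumptions for the sketch; the paper's scheme would pick the factor from the accelerator's alignment requirement.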
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolution" title="convolution">convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=deconvolution" title=" deconvolution"> deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=W2C" title=" W2C"> W2C</a>, <a href="https://publications.waset.org/abstracts/search?q=C2W" title=" C2W"> C2W</a>, <a href="https://publications.waset.org/abstracts/search?q=alignment" title=" alignment"> alignment</a>, <a href="https://publications.waset.org/abstracts/search?q=hardware%20accelerator" title=" hardware accelerator"> hardware accelerator</a> </p> <a href="https://publications.waset.org/abstracts/157366/operator-optimization-based-on-hardware-architecture-alignment-requirements" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157366.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">104</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">309</span> Improvement of Analysis Vertical Oil Exploration Wells (Case Study)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Azza%20Hashim%20Abbas">Azza Hashim Abbas</a>, <a href="https://publications.waset.org/abstracts/search?q=Wan%20Rosli%20Wan%20Suliman"> Wan Rosli Wan Suliman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the old school of study, well-testing reservoir engineers used transient pressure analyses to obtain certain parameters and variable factors of the reservoir's physical properties, such as permeability-thickness. 
Recently, the difficulty facing newly discovered areas is the convincing fact that the exploration and production (E&P) team should have sufficiently accurate and appropriate data to work with, due to different sources of errors. The well-test analyst may do the work without going through well-informed and reliable data from colleagues, which may consequently cause immense environmental damage and unnecessary financial losses, as well as opportunity losses, to the project. In 2003, in the new potential oil field (Moga), Well-22 faced a circulation problem but was safely completed. However, the high mud density had caused extensive damage to the near-wellbore area, which also distorted the hypothetical oil flow rate so that it was not representative of the real reservoir characteristics. This paper presents methods to analyze and interpret the production rate and pressure data of an oil field, specifically for Well-22, using the deconvolution technique to enhance the transient pressure response. Deconvolution is applied to obtain the best range of certainty of the results needed for the subsequent operation. The determined range and the analysis of the skin factor range were reasonable. 
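Deconvolution here refers to recovering the reservoir's unit-rate pressure response from a variable-rate production history. A minimal linear-superposition sketch (hypothetical, and far simpler than the constrained algorithms used in practice, which must cope with noisy field data):

```python
import numpy as np

def deconvolve_unit_response(q, p, n):
    """Least-squares deconvolution of the discrete convolution model
    p[t] = sum_k q[k] * g[t-k] (Duhamel superposition): recover the
    unit-rate response g of length n from rate q and pressure drop p."""
    m = len(p)
    A = np.zeros((m, n))
    for t in range(m):
        for j in range(n):
            if 0 <= t - j < len(q):
                A[t, j] = q[t - j]       # convolution (Toeplitz) matrix
    g, *_ = np.linalg.lstsq(A, p, rcond=None)
    return g

# synthetic check: a known exponential unit response is recovered exactly
rng = np.random.default_rng(0)
q = rng.uniform(0.5, 1.5, 50)            # variable rate history (assumed)
g_true = np.exp(-np.arange(20) / 10.0)   # assumed unit-rate response
p = np.convolve(q, g_true)               # noise-free synthetic pressure data
g_est = deconvolve_unit_response(q, p, 20)
```

With noise-free synthetic data the least-squares solve is exact; real pressure data would require the regularized formulations used in well-test deconvolution.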
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=well%20testing" title="well testing">well testing</a>, <a href="https://publications.waset.org/abstracts/search?q=exploration" title=" exploration"> exploration</a>, <a href="https://publications.waset.org/abstracts/search?q=deconvolution" title=" deconvolution"> deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20factor" title=" skin factor"> skin factor</a>, <a href="https://publications.waset.org/abstracts/search?q=un%20certainity" title=" un certainity"> un certainity</a> </p> <a href="https://publications.waset.org/abstracts/29312/improvement-of-analysis-vertical-oil-exploration-wells-case-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29312.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">445</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">308</span> Blind Speech Separation Using SRP-PHAT Localization and Optimal Beamformer in Two-Speaker Environments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hai%20Quang%20Hong%20Dam">Hai Quang Hong Dam</a>, <a href="https://publications.waset.org/abstracts/search?q=Hai%20Ho"> Hai Ho</a>, <a href="https://publications.waset.org/abstracts/search?q=Minh%20Hoang%20Le%20Ngo"> Minh Hoang Le Ngo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates the problem of blind speech separation from the speech mixture of two speakers. 
A voice activity detector employing the Steered Response Power - Phase Transform (SRP-PHAT) is presented for detecting the activity information of the speech sources, and the desired speech signals are then extracted from the speech mixture by using an optimal beamformer. To evaluate the algorithm's effectiveness, a simulation using real speech recordings was performed in a double-talk situation where two speakers are active all the time. The evaluations show that the proposed blind speech separation algorithm offers a good interference suppression level whilst maintaining a low distortion level of the desired signal. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20speech%20separation" title="blind speech separation">blind speech separation</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20activity%20detector" title=" voice activity detector"> voice activity detector</a>, <a href="https://publications.waset.org/abstracts/search?q=SRP-PHAT" title=" SRP-PHAT"> SRP-PHAT</a>, <a href="https://publications.waset.org/abstracts/search?q=optimal%20beamformer" title=" optimal beamformer"> optimal beamformer</a> </p> <a href="https://publications.waset.org/abstracts/53263/blind-speech-separation-using-srp-phat-localization-and-optimal-beamformer-in-two-speaker-environments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53263.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">283</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">307</span> Gas-Liquid Flow Regimes in Vertical Venturi Downstream of Horizontal Blind-Tee</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Muhammad%20Alif%20Bin%20Razali">Muhammad Alif Bin Razali</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng-Gang%20Xie"> Cheng-Gang Xie</a>, <a href="https://publications.waset.org/abstracts/search?q=Wai%20Lam%20Loh"> Wai Lam Loh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A venturi device is commonly used as an integral part of a multiphase flowmeter (MPFM) in real-time oil-gas production monitoring. For an accurate determination of individual phase fractions and flowrates, a gas-liquid flow ideally needs to be well mixed in the venturi measurement section. Partial flow mixing is achieved by installing a venturi vertically downstream of the blind-tee pipework that ‘homogenizes’ the incoming horizontal gas-liquid flow. In order to study in depth the flow-mixing effect of the blind-tee, gas-liquid flows are captured at the blind-tee and venturi sections by using a high-speed video camera and a purpose-built transparent test rig, over a wide range of superficial liquid velocities (0.3 to 2.4 m/s) and gas volume fractions (10 to 95%). Electrical capacitance sensors are built to measure the instantaneous holdup (of oil-gas flows) at the venturi inlet and throat. Flow regimes and flow (a)symmetry are investigated by analyzing the statistical features of the capacitance sensors’ holdup time-series data and of the high-speed video time-stacked images. The perceived homogenization effect of the blind-tee on the incoming intermittent horizontal flow regimes is found to be relatively small across the tested flow conditions. A horizontal (blind-tee) to vertical (venturi) flow-pattern transition map is proposed based on gas and liquid mass fluxes (weighted by the Baker parameters). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind-tee" title="blind-tee">blind-tee</a>, <a href="https://publications.waset.org/abstracts/search?q=flow%20visualization" title=" flow visualization"> flow visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=gas-liquid%20two-phase%20flow" title=" gas-liquid two-phase flow"> gas-liquid two-phase flow</a>, <a href="https://publications.waset.org/abstracts/search?q=MPFM" title=" MPFM"> MPFM</a> </p> <a href="https://publications.waset.org/abstracts/129335/gas-liquid-flow-regimes-in-vertical-venturi-downstream-of-horizontal-blind-tee" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129335.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">306</span> Understanding the Experience of the Visually Impaired towards a Multi-Sensorial Architectural Design</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sarah%20M.%20Oteifa">Sarah M. Oteifa</a>, <a href="https://publications.waset.org/abstracts/search?q=Lobna%20A.%20Sherif"> Lobna A. Sherif</a>, <a href="https://publications.waset.org/abstracts/search?q=Yasser%20M.%20Mostafa"> Yasser M. Mostafa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visually impaired people, in their daily lives, face struggles and spatial barriers because the built environment is often designed with an extreme focus on the visual element, causing what is called architectural visual bias or ocularcentrism. 
The aim of the study is to holistically understand the world of the visually impaired as an attempt to extract the qualities of space that accommodate their needs, and to show the importance of multi-sensory, holistic designs for the blind. Within the framework of existential phenomenology, common themes are reached through "intersubjectivity": experience descriptions by blind people and blind architects, observations of how blind children learn to perceive their surrounding environment, and a personal lived blindfolded experience are analyzed. The extracted themes show how visually impaired people filter out and prioritize tactile (active, passive and dynamic touch), acoustic and olfactory spatial qualities, respectively, and how this happened during the personal lived blindfolded experience. The themes clarify that haptically and aurally inclusive designs are essential to create environments suitable for the visually impaired and to empower them towards an independent, safe and efficient life. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=architecture" title="architecture">architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=architectural%20ocularcentrism" title=" architectural ocularcentrism"> architectural ocularcentrism</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-sensory%20design" title=" multi-sensory design"> multi-sensory design</a>, <a href="https://publications.waset.org/abstracts/search?q=visually%20impaired" title=" visually impaired"> visually impaired</a> </p> <a href="https://publications.waset.org/abstracts/72324/understanding-the-experience-of-the-visually-impaired-towards-a-multi-sensorial-architectural-design" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72324.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span 
class="badge badge-light">202</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">305</span> The Implementation of Special Grammar Circle (Spegraci) as the Media Innovation for Blind People to Learn English Tenses</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aji%20Budi%20Rinekso">Aji Budi Rinekso</a>, <a href="https://publications.waset.org/abstracts/search?q=Revika%20Niza%20Artiyana"> Revika Niza Artiyana</a>, <a href="https://publications.waset.org/abstracts/search?q=Lisa%20Widayanti"> Lisa Widayanti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> English is one of the international languages of the world. People use this language to communicate with each other in international forums, international events and international organizations. Like other languages, English has a set of rules, called grammar, that forms the system of the language. Within grammar there are tenses, which provide a time-period system for the past, present and future. Sometimes it is difficult for English learners to remember all of the tenses completely, especially for those with special needs or exceptional children with visual impairment. The aims of this research are 1) To know the design of the Special Grammar Circle (Spegraci) as a medium for blind people to learn English grammar. 2) To know how the Special Grammar Circle (Spegraci) works as a medium for blind people to learn English grammar. 3) To know the function of this device in increasing the tenses ability of blind people. The method of this research is Research and Development, which consists of several rounds of testing and revision of the device. The implementation of the Special Grammar Circle (Spegraci) makes it easy for blind people to learn the tenses. This device is easy to use. 
Users simply roll the device, find the tense formula, and match it to the name of the formula in Braille. In addition, the device can also be used by sighted people, because normal written text is provided as well. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20people" title="blind people">blind people</a>, <a href="https://publications.waset.org/abstracts/search?q=media%20innovation" title=" media innovation"> media innovation</a>, <a href="https://publications.waset.org/abstracts/search?q=spegraci" title=" spegraci"> spegraci</a>, <a href="https://publications.waset.org/abstracts/search?q=tenses" title=" tenses"> tenses</a> </p> <a href="https://publications.waset.org/abstracts/38099/the-implementation-of-special-grammar-circle-spegraci-as-the-media-innovation-for-blind-people-to-learn-english-tenses" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38099.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">295</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">304</span> Source Separation for Global Multispectral Satellite Images Indexing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aymen%20Bouzid">Aymen Bouzid</a>, <a href="https://publications.waset.org/abstracts/search?q=Jihen%20Ben%20Smida"> Jihen Ben Smida</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose to demonstrate the importance of applying blind source separation methods to remote sensing data in order to index multispectral images. 
The proposed method starts with Gabor filtering followed by blind source separation to obtain a more effective representation of the information contained in the observed images. After that, a feature vector is extracted from each image in order to index it. Experimental results show the superior performance of this approach. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20source%20separation" title="blind source separation">blind source separation</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20based%20image%20retrieval" title=" content based image retrieval"> content based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction%20multispectral" title=" feature extraction multispectral"> feature extraction multispectral</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20images" title=" satellite images"> satellite images</a> </p> <a href="https://publications.waset.org/abstracts/28585/source-separation-for-global-multispectral-satellite-images-indexing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28585.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">403</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">303</span> A Study of the Tactile Codification on the Philippine Banknote: Redesigning for the Blind</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ace%20Mari%20S.%20Simbajon">Ace Mari S. Simbajon</a>, <a href="https://publications.waset.org/abstracts/search?q=Rhaella%20J.%20Yba%C3%B1ez"> Rhaella J. Ybañez</a>, <a href="https://publications.waset.org/abstracts/search?q=Mae%20G.%20Nadela"> Mae G. Nadela</a>, <a href="https://publications.waset.org/abstracts/search?q=Cherry%20E.%20Sagun"> Cherry E. Sagun</a>, <a href="https://publications.waset.org/abstracts/search?q=Nera%20Mae%20A.%20Puyo"> Nera Mae A. Puyo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study determined the usability of the Philippine banknotes. An experimental design was used involving twenty (n=20) randomly selected blind participants. Three aspects of usability were measured: effectiveness, efficiency, and satisfaction. It was found that the effectiveness rate of the current Philippine banknotes ranges from 20 percent to 35 percent, which means they are not effective based on Sauro's benchmark average effectiveness rate of 78 percent. Their efficiency ranges from 18.06 to 26.22 seconds per denomination. The average satisfaction rating is 1.45, which means the blind participants are very dissatisfied. These results were used as a guide in creating the proposed tactile codification using embossed dots or embossed lines. A round of simulation was conducted with the blind participants to assess the usability of the two proposals. Results were then statistically treated using a t-test and show a statistically significant difference between the usability of the current banknotes and that of the proposed designs. Moreover, it was found that the embossed dots are more effective, more efficient, and more satisfying than the embossed lines, with an effectiveness rate ranging from 90 percent to 100 percent, an efficiency ranging from 6.73 seconds to 12.99 seconds, and a satisfaction rating of 3.4, which means the blind participants are very satisfied. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind" title="blind">blind</a>, <a href="https://publications.waset.org/abstracts/search?q=Philippine%20banknotes" title=" Philippine banknotes"> Philippine banknotes</a>, <a href="https://publications.waset.org/abstracts/search?q=tactile%20codification" title=" tactile codification"> tactile codification</a>, <a href="https://publications.waset.org/abstracts/search?q=usability" title=" usability"> usability</a> </p> <a href="https://publications.waset.org/abstracts/71453/a-study-of-the-tactile-codification-on-the-philippine-banknote-redesigning-for-the-blind" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71453.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">288</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">302</span> Magnetic Survey for the Delineation of Concrete Pillars in Geotechnical Investigation for Site Characterization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nuraddeen%20Usman">Nuraddeen Usman</a>, <a href="https://publications.waset.org/abstracts/search?q=Khiruddin%20Abdullah"> Khiruddin Abdullah</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Nawawi"> Mohd Nawawi</a>, <a href="https://publications.waset.org/abstracts/search?q=Amin%20Khalil%20Ismail"> Amin Khalil Ismail</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A magnetic survey is carried out in order to locate the remains of construction items, specifically concrete pillars. 
The conventional Euler deconvolution technique can perform the task, but it requires a fixed structural index (SI), while the construction items are made of materials with different shapes that require different (unknown) SI values. An Euler deconvolution technique that estimates the background, horizontal coordinates (x0 and y0), depth, and structural index (SI) simultaneously is therefore prepared and used for this task. The synthetic model study carried out indicated that the new methodology gives a good estimate of location and does not depend on magnetic latitude. For the field data, both the total magnetic field and gradiometer readings were collected simultaneously. The computed vertical derivatives and the gradiometer readings are compared, and they show good correlation, signifying the effectiveness of the method. The filtering is carried out using an automated procedure, the analytic signal, and other traditional techniques. The clustered depth solutions coincide with the high-amplitude values of the analytic signal, and these are the possible positions of the concrete pillars being sought. The targets under investigation are interpreted to be located at depths between 2.8 and 9.4 meters. Further follow-up surveys are recommended, as this work marks the preliminary stage of the investigation. 
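For illustration, the conventional fixed-SI variant that the abstract contrasts with can be sketched as a windowed least-squares solve of Euler's homogeneity equation. This is a minimal sketch, not the authors' code; the function name `euler_window` and the synthetic setup are hypothetical:

```python
import numpy as np

def euler_window(x, y, z, T, dTdx, dTdy, dTdz, si):
    """Classic Euler deconvolution over one data window.

    Solves the homogeneity equation
        (x - x0)*dT/dx + (y - y0)*dT/dy + (z - z0)*dT/dz = si*(B - T)
    in the least-squares sense for the source position (x0, y0, z0) and the
    regional background B, with the structural index `si` held fixed.
    """
    # Rearranged as A @ [x0, y0, z0, B] = b
    A = np.column_stack([dTdx, dTdy, dTdz, si * np.ones_like(T)])
    b = x * dTdx + y * dTdy + z * dTdz + si * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol  # estimated x0, y0, z0, B
```

The extended technique described in the abstract additionally treats the SI as an unknown, which turns the linear solve above into a nonlinear estimation problem.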
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=concrete%20pillar" title="concrete pillar">concrete pillar</a>, <a href="https://publications.waset.org/abstracts/search?q=magnetic%20survey" title=" magnetic survey"> magnetic survey</a>, <a href="https://publications.waset.org/abstracts/search?q=geotechnical%20investigation" title=" geotechnical investigation"> geotechnical investigation</a>, <a href="https://publications.waset.org/abstracts/search?q=Euler%20Deconvolution" title=" Euler Deconvolution"> Euler Deconvolution</a> </p> <a href="https://publications.waset.org/abstracts/70560/magnetic-survey-for-the-delineation-of-concrete-pillars-in-geotechnical-investigation-for-site-characterization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70560.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">258</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">301</span> A Way of Converting Color Images to Gray Scale Ones for the Color-Blind: Applying to the part of the Tokyo Subway Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsuhiro%20Narikiyo">Katsuhiro Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shota%20Hashikawa"> Shota Hashikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a way of removing noise and reducing the number of colors contained in a JPEG image. The main purpose of this project is to convert color images to monochrome images for the color-blind. We treat crisp color images such as the Tokyo subway map, in which each color carries important information. 
However, color-blind viewers cannot distinguish similar colors. If we convert those colors to distinct gray values, such viewers can tell them apart. Therefore, we convert color images to monochrome images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color-blind" title="color-blind">color-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG" title=" JPEG"> JPEG</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20image" title=" monochrome image"> monochrome image</a>, <a href="https://publications.waset.org/abstracts/search?q=denoise" title=" denoise"> denoise</a> </p> <a href="https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">355</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">300</span> Identification and Classification of Entrepreneurial Opportunities in Blinds’ Tourism Industry in Khuzestan Province of Iran</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Kharazi">Ali Kharazi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hassanali%20Aghajani"> Hassanali Aghajani</a>, <a href="https://publications.waset.org/abstracts/search?q=Hesami%20Azizi"> Hesami Azizi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tourism entrepreneurship is a growing field that has the potential to create new opportunities for sustainable development. 
The purpose of this study is to identify and classify the entrepreneurial opportunities in the blind tourism industry in Khuzestan Province of Iran that can be created through the operation of blinds’ tours. This study used a mixed methods approach. The qualitative data were collected through semi-structured interviews with 15 tourist guides and tourism activists, while the quantitative data were collected through a questionnaire survey of 40 blind people who had participated in blinds’ tours. The findings of this study suggest that there are a number of entrepreneurial opportunities in the blind tourism industry in Khuzestan Province, including (1) developing and providing accessible tourism services, such as tours, accommodations, restaurants, and transportation, (2) creating and marketing blind-friendly tourism products and experiences, and (3) training and educating tourism professionals on how to provide accessible and inclusive tourism services. This study contributes to the theoretical understanding of tourism entrepreneurship by providing insights into the entrepreneurial opportunities in the blind tourism industry. The findings of this study can be used to develop policies and programs that support the development of the blind tourism industry. The qualitative data were analyzed using content analysis, and the quantitative data were analyzed using descriptive and inferential statistics. In addition, Khuzestan Province has seen relatively good development in blinds’ tourism. Blind tourists have become loyal customers of blinds’ tours, which has increased their self-confidence and social participation. Tourist guides and tourism service centers are more interested than before in participating in blinds’ tours, and even parties outside the tourism sector have offered sponsorship. 
Education has played a significant role in improving the quality of tourism services, especially for the blind. However, the quality and quantity of infrastructure should be increased across the different sectors of tourism services to foster future growth. These opportunities can be used to create new businesses and jobs and to promote sustainable development in the region. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=entrepreneurship" title="entrepreneurship">entrepreneurship</a>, <a href="https://publications.waset.org/abstracts/search?q=tourism" title=" tourism"> tourism</a>, <a href="https://publications.waset.org/abstracts/search?q=blind" title=" blind"> blind</a>, <a href="https://publications.waset.org/abstracts/search?q=sustainable%20development" title=" sustainable development"> sustainable development</a>, <a href="https://publications.waset.org/abstracts/search?q=Khuzestan" title=" Khuzestan"> Khuzestan</a> </p> <a href="https://publications.waset.org/abstracts/179051/identification-and-classification-of-entrepreneurial-opportunities-in-blinds-tourism-industry-in-khuzestan-province-of-iran" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/179051.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">299</span> The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ning%20Chang">Ning Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Zelong%20Yuan"> Zelong 
Yuan</a>, <a href="https://publications.waset.org/abstracts/search?q=Yunpeng%20Wang"> Yunpeng Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianchun%20Wang"> Jianchun Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES computes the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and to the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of sub-filter scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of the SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Secondly, we explore filter anisotropy to address its impact on the SFS dynamics and LES accuracy. 
By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence. 
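The deconvolution family of SFS models reconstructs an approximation of the unfiltered field by approximately inverting the LES filter. A minimal one-dimensional sketch using the classic van Cittert iteration follows; this is an illustrative assumption on our part (the paper's DDM formulation is more general), and the function names are hypothetical:

```python
import numpy as np

def gaussian_filter_1d(u, width, dx):
    """Apply a periodic Gaussian LES filter of width `width` via FFT."""
    k = np.fft.fftfreq(u.size, d=dx) * 2.0 * np.pi
    # Spectral transfer function of the Gaussian filter
    G = np.exp(-(k * width) ** 2 / 24.0)
    return np.real(np.fft.ifft(np.fft.fft(u) * G))

def van_cittert_deconvolve(u_bar, width, dx, iterations=5):
    """Approximate deconvolution u* = sum_k (I - G)^k u_bar via fixed-point
    iteration u_{n+1} = u_n + (u_bar - G u_n), whose fixed point satisfies
    G u = u_bar."""
    u_star = u_bar.copy()
    for _ in range(iterations):
        u_star = u_star + (u_bar - gaussian_filter_1d(u_star, width, dx))
    return u_star
```

A truncated number of iterations regularizes the inversion: the unresolved scales killed by the filter are never amplified, which is what makes deconvolution models usable as SFS closures.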
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deconvolution%20model" title="deconvolution model">deconvolution model</a>, <a href="https://publications.waset.org/abstracts/search?q=large%20eddy%20simulation" title=" large eddy simulation"> large eddy simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=subfilter%20scale%20modeling" title=" subfilter scale modeling"> subfilter scale modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=turbulence" title=" turbulence"> turbulence</a> </p> <a href="https://publications.waset.org/abstracts/171846/the-direct-deconvolution-model-for-the-large-eddy-simulation-of-turbulence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171846.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">75</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">298</span> A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Kalhori">Hamed Kalhori</a>, <a href="https://publications.waset.org/abstracts/search?q=Lin%20Ye"> Lin Ye</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel away from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. 
Two discretized forms of the convolution integral are considered: the traditional one, with an explicit transfer function, and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended here to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently applied to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different widths of the signal windows on the reconstructed force is examined. 
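For the discretized convolution y = H f (H a Toeplitz matrix built from the impulse response), the zeroth-order Tikhonov solution minimizes ||Hf - y||^2 + lambda^2 ||f||^2. A minimal sketch of this step, under assumed names and synthetic data (the paper's transfer functions are measured, not assumed):

```python
import numpy as np

def convolution_matrix(h, n):
    """Lower-triangular Toeplitz matrix H with H @ f == np.convolve(h, f)[:n]."""
    H = np.zeros((n, n))
    for j in range(n):
        m = min(h.size, n - j)
        H[j:j + m, j] = h[:m]  # column j holds h shifted down by j samples
    return H

def tikhonov_deconvolve(H, y, lam):
    """Solve min ||H f - y||^2 + lam^2 ||f||^2 via the normal equations."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam**2 * np.eye(n), H.T @ y)
```

The regularization parameter `lam` trades fidelity against noise amplification, which is exactly what the L-curve and GCV criteria mentioned above are used to select.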
It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, with shapes ranging from a simple half-sine to more complicated ones. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed by using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=honeycomb%20composite%20panel" title="honeycomb composite panel">honeycomb composite panel</a>, <a href="https://publications.waset.org/abstracts/search?q=deconvolution" title=" deconvolution"> deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=impact%20localization" title=" impact localization"> impact localization</a>, <a href="https://publications.waset.org/abstracts/search?q=force%20reconstruction" title=" force reconstruction"> force reconstruction</a> </p> <a href="https://publications.waset.org/abstracts/30671/a-study-on-inverse-determination-of-impact-force-on-a-honeycomb-composite-panel" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30671.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">535</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">297</span> Exploiting Fast Independent Component Analysis Based Algorithm for Equalization of Impaired Baseband Received Signal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Muhammad%20Umair">Muhammad Umair</a>, <a href="https://publications.waset.org/abstracts/search?q=Syed%20Qasim%20Gilani"> Syed Qasim Gilani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A technique using Independent Component Analysis (ICA) for blind receiver signal processing is investigated. Receiver signal processing is viewed as a problem of signal equalization and compensation for implementation imperfections. Based on this, a model similar to a general ICA problem is developed for the received signal. The use of the ICA technique for blind signal equalization in the time domain is then presented. The equalization is regarded as a signal separation problem, since the desired signal is separated from the interference terms. This problem is addressed in the paper by over-sampling the received signal. By using ICA for equalization, besides channel equalization, other transmission imperfections such as direct current (DC) bias offset, carrier phase offset, and in-phase/quadrature (I/Q) imbalance are also corrected. Simulation results for a system using 16-Quadrature Amplitude Modulation (16-QAM) are presented to show the performance of the proposed scheme. 
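The separation step underlying this approach can be illustrated with a minimal symmetric FastICA. This is a sketch under simplifying assumptions (two synthetic real-valued sources, tanh nonlinearity), not the authors' receiver implementation:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA with a tanh nonlinearity.

    X: (n_components, n_samples) array of observed mixtures.
    Returns the estimated sources (up to permutation, sign, and scale).
    """
    rng = np.random.default_rng(seed)
    # Centre and whiten the observations
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    n = Z.shape[0]
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        # Fixed-point update: E[z g(Wz)] - E[g'(Wz)] W, with g = tanh
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / Z.shape[1] - np.diag((1.0 - G**2).mean(axis=1)) @ W
        # Symmetric decorrelation keeps the unmixing rows orthonormal
        U, _, Vt = np.linalg.svd(W_new)
        W = U @ Vt
    return W @ Z
```

In the paper's setting, the "sources" are the desired symbol stream and the interference terms produced by the channel and hardware impairments; over-sampling supplies the multiple mixture observations that ICA needs.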
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20equalization" title="blind equalization">blind equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20signal%20separation" title=" blind signal separation"> blind signal separation</a>, <a href="https://publications.waset.org/abstracts/search?q=equalization" title=" equalization"> equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=independent%20component%20analysis" title=" independent component analysis"> independent component analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=transmission%20impairments" title=" transmission impairments"> transmission impairments</a>, <a href="https://publications.waset.org/abstracts/search?q=QAM%20receiver" title=" QAM receiver"> QAM receiver</a> </p> <a href="https://publications.waset.org/abstracts/94433/exploiting-fast-independent-component-analysis-based-algorithm-for-equalization-of-impaired-baseband-received-signal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94433.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">214</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">296</span> Blind Super-Resolution Reconstruction Based on PSF Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Osama%20A.%20Omer">Osama A. Omer</a>, <a href="https://publications.waset.org/abstracts/search?q=Amal%20Hamed"> Amal Hamed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Successful blind image Super-Resolution algorithms require the exact estimation of the Point Spread Function (PSF). 
In the absence of any prior information about the imaging system and the true image, this estimation is normally done by trial-and-error experimentation until an acceptable restored image quality is obtained. Multi-frame blind super-resolution algorithms often suffer from slow convergence and sensitivity to complex noise. This paper presents a super-resolution image reconstruction algorithm based on an estimation of the PSF that yields the optimum restored image quality. The estimation of the PSF is performed by the knife-edge method, implemented by measuring the spreading of edges in the reproduced HR image itself during the reconstruction process. The proposed image reconstruction approach uses L1-norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. A series of experimental results shows that the proposed method outperforms previous work robustly and efficiently. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind" title="blind">blind</a>, <a href="https://publications.waset.org/abstracts/search?q=PSF" title=" PSF"> PSF</a>, <a href="https://publications.waset.org/abstracts/search?q=super-resolution" title=" super-resolution"> super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=knife-edge" title=" knife-edge"> knife-edge</a>, <a href="https://publications.waset.org/abstracts/search?q=blurring" title=" blurring"> blurring</a>, <a href="https://publications.waset.org/abstracts/search?q=bilateral" title=" bilateral"> bilateral</a>, <a href="https://publications.waset.org/abstracts/search?q=L1%20norm" title=" L1 norm"> L1 norm</a> </p> <a href="https://publications.waset.org/abstracts/1385/blind-super-resolution-reconstruction-based-on-psf-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1385.pdf" target="_blank" class="btn btn-primary 
btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution&page=10">10</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution&page=11">11</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div 
style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" 
rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>