<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: bidirectional microphone</title> <meta name="description" content="Search results for: bidirectional microphone"> <meta name="keywords" content="bidirectional microphone"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="bidirectional microphone" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="bidirectional microphone"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 145</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: bidirectional microphone</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">145</span> Switched Uses of a Bidirectional Microphone as a Microphone and Sensors with High Gain and Wide Frequency Range</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Toru%20Shionoya">Toru Shionoya</a>, <a href="https://publications.waset.org/abstracts/search?q=Yosuke%20Kurihara"> Yosuke Kurihara</a>, <a href="https://publications.waset.org/abstracts/search?q=Takashi%20Kaburagi"> Takashi Kaburagi</a>, <a href="https://publications.waset.org/abstracts/search?q=Kajiro%20Watanabe"> Kajiro Watanabe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Mass-produced bidirectional microphones have attractive characteristics. 
They work as a microphone as well as a sensor with high gain over a wide frequency range; they are also highly reliable and economical. We present multiple novel functional uses of these microphones. A mathematical model explaining the high-pass-filtering characteristics of bidirectional microphones is presented. Based on the model, the characteristics of the microphone are investigated, and a novel use of the microphone as a sensor with a wide frequency range is presented. In this study, applications of the microphone as a security sensor and as a human biosensor are introduced. The mathematical model is validated through experiments, and the feasibility of the abovementioned applications for security monitoring and biosignal monitoring is examined experimentally. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bidirectional%20microphone" title="bidirectional microphone">bidirectional microphone</a>, <a href="https://publications.waset.org/abstracts/search?q=low-frequency" title=" low-frequency"> low-frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20model" title=" mathematical model"> mathematical model</a>, <a href="https://publications.waset.org/abstracts/search?q=frequency%20response" title=" frequency response"> frequency response</a> </p> <a href="https://publications.waset.org/abstracts/17138/switched-uses-of-a-bidirectional-microphone-as-a-microphone-and-sensors-with-high-gain-and-wide-frequency-range" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17138.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">545</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">144</span> Evaluation
Using a Bidirectional Microphone as a Pressure Pulse Wave Meter</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shunsuke%20Fujiwara">Shunsuke Fujiwara</a>, <a href="https://publications.waset.org/abstracts/search?q=Takashi%20Kaburagi"> Takashi Kaburagi</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazuyuki%20Kobayashi"> Kazuyuki Kobayashi</a>, <a href="https://publications.waset.org/abstracts/search?q=Kajiro%20Watanabe"> Kajiro Watanabe</a>, <a href="https://publications.waset.org/abstracts/search?q=Yosuke%20Kurihara"> Yosuke Kurihara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper describes a novel sensor device, a pressure pulse wave meter, which uses a bidirectional condenser microphone. The microphone works as a microphone as well as a sensor with high gain over a wide frequency range; it is also highly reliable and economical. Aging is currently becoming a serious social issue in Japan, causing increased medical expenses in the country. Hence, it is important for elderly citizens to check their health condition at home and to manage it through daily monitoring. Given these circumstances, we developed a novel pressure pulse wave meter based on a bidirectional condenser microphone. This novel device is used as an instrument for measuring health conditions.
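The high-pass behaviour that these abstracts attribute to the bidirectional microphone can be illustrated with a first-order high-pass magnitude response; the corner frequency below is an assumed, illustrative value, not one taken from the papers:

```python
import math

def highpass_gain(f_hz, fc_hz):
    """Magnitude response of a first-order high-pass filter at f_hz.

    A bidirectional (pressure-gradient) microphone rolls off low
    frequencies roughly like this, which is why a sub-audio signal such
    as a pressure pulse wave is strongly attenuated at the output.
    """
    ratio = f_hz / fc_hz
    return ratio / math.sqrt(1.0 + ratio * ratio)

fc = 20.0  # assumed corner frequency in Hz, for illustration only

low = highpass_gain(1.0, fc)      # a ~1 Hz pulse component: heavily attenuated
mid = highpass_gain(fc, fc)       # at the corner frequency: gain 1/sqrt(2)
high = highpass_gain(1000.0, fc)  # audio band: passes almost unchanged
```

Recovering the low-frequency signal then amounts to compensating this known response (dividing by the modeled gain), which is one way to read the papers' use of the microphone as a sensor with a wide frequency range.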
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bidirectional%20microphone" title="bidirectional microphone">bidirectional microphone</a>, <a href="https://publications.waset.org/abstracts/search?q=pressure%20pulse%20wave%20meter" title=" pressure pulse wave meter"> pressure pulse wave meter</a>, <a href="https://publications.waset.org/abstracts/search?q=health%20condition" title=" health condition"> health condition</a>, <a href="https://publications.waset.org/abstracts/search?q=novel%20sensor%20device" title=" novel sensor device"> novel sensor device</a> </p> <a href="https://publications.waset.org/abstracts/28575/evaluation-using-a-bidirectional-microphone-as-a-pressure-pulse-wave-meter" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28575.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">554</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">143</span> Speech Enhancement Using Kalman Filter in Communication</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eng.%20Alaa%20K.%20Satti%20Salih">Eng. Alaa K. Satti Salih</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In applications such as telecommunications, hands-free communication, and recording, which need at least one microphone, the signal is usually corrupted by noise and echo. An important application is speech enhancement, which removes the unwanted noise and echo picked up by the microphone alongside the desired speech. Accordingly, the microphone signal has to be cleaned using digital signal processing (DSP) tools before it is played out, transmitted, or stored. Engineers have so far tried different approaches to improving speech by recovering the desired speech signal from the noisy observations, especially in mobile communication. In this paper, the speech signal, observed in additive background noise, is reconstructed using the Kalman filter technique to estimate the parameters of the autoregressive (AR) process in a state-space model; the output speech signal is obtained in MATLAB. The accurate Kalman filter estimates enhance the speech and reduce the noise; the actual and estimated values that produce the reconstructed signals are then compared and discussed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autoregressive%20process" title="autoregressive process">autoregressive process</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20speech" title=" noise speech"> noise speech</a> </p> <a href="https://publications.waset.org/abstracts/7182/speech-enhancement-using-kalman-filter-in-communication" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7182.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">344</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">142</span> Comparison of Direction of Arrival Estimation Method for Drone Based on Phased Microphone Array</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiwon%20Lee">Jiwon Lee</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Yeong-Ju%20Go"> Yeong-Ju Go</a>, <a href="https://publications.waset.org/abstracts/search?q=Jong-Soo%20Choi"> Jong-Soo Choi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Drones were first developed for military use and were deployed in World War I, but recently they have been used in a variety of fields. Several companies actively utilize drone technology to strengthen their services; in agriculture, drones are used for crop monitoring and sowing, and others use drones for hobbies such as photography. However, as the range of drone use expands rapidly, problems caused by drones, such as improper flying, privacy violations, and terrorism, are also increasing. As the need for monitoring and tracking of drones grows, research is progressing accordingly. A drone detection system estimates the position of a drone using the physical phenomena that occur when it flies. The drone detection systems being developed utilize many approaches, such as radar, infrared cameras, and acoustic detection. Among these, acoustic detection is advantageous in that the microphone array system is smaller, less expensive, and easier to operate than the other systems. In this paper, the acoustic signal is acquired with a minimal number of microphones while the drone is flying, and the direction of the drone is estimated. The direction of arrival (DOA) can be calculated either from the time difference of arrival (TDOA) or by beamforming. The TDOA technique requires fewer microphones than the beamforming technique, but it is weak in noisy environments and can only estimate the DOA of a single source. The beamforming technique requires more microphones than the TDOA technique; however, it is robust in noisy environments and can simultaneously estimate the DOAs of several drones. 
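As a rough illustration of the TDOA approach compared above, the sketch below cross-correlates two microphone signals to estimate the inter-microphone delay and converts it to an angle under a far-field, single-source assumption; the sampling rate, microphone spacing, and signals are invented for the example, not taken from the paper:

```python
import numpy as np

def tdoa_angle(sig_a, sig_b, fs, mic_distance, c=343.0):
    """Estimate a direction of arrival from two microphone signals.

    Cross-correlate to find the time difference of arrival (TDOA), then
    convert the delay to an angle with the far-field, single-source
    relation sin(theta) = c * tau / d.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # in samples; positive => b leads
    tau = lag / fs
    s = np.clip(c * tau / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Synthetic check: the same noise burst, arriving 5 samples later at mic B.
rng = np.random.default_rng(0)
burst = rng.standard_normal(256)
fs, d = 48_000, 0.1  # 48 kHz sampling, mics 10 cm apart (both assumed)
a = np.concatenate([burst, np.zeros(5)])
b = np.concatenate([np.zeros(5), burst])
theta = tdoa_angle(a, b, fs, d)  # negative angle: source on mic A's side
```

As the abstract notes, this kind of estimator only resolves a single source; beamforming over a larger array trades more hardware for multi-source capability and noise robustness.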
When estimating the DOA using acoustic signals emitted from the drone, only the direction, not the position, of the drone can be determined. To overcome this problem, in this work we show how to estimate the position of drones by arranging multiple microphone arrays. The setup used in the experiments consisted of four tetrahedral microphone arrays. We simulated the performance of each DOA algorithm and demonstrated the simulation results through experiments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acoustic%20sensing" title="acoustic sensing">acoustic sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=direction%20of%20arrival" title=" direction of arrival"> direction of arrival</a>, <a href="https://publications.waset.org/abstracts/search?q=drone%20detection" title=" drone detection"> drone detection</a>, <a href="https://publications.waset.org/abstracts/search?q=microphone%20array" title=" microphone array"> microphone array</a> </p> <a href="https://publications.waset.org/abstracts/94230/comparison-of-direction-of-arrival-estimation-method-for-drone-based-on-phased-microphone-array" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94230.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">141</span> A Study on the Improvement of Mobile Device Call Buzz Noise Caused by Audio Frequency Ground Bounce</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jangje%20Park">Jangje Park</a>, <a href="https://publications.waset.org/abstracts/search?q=So%20Young%20Kim"> So Young 
Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The market demand for audio quality in mobile devices continues to increase, and the audible buzz noise generated in time-division communication is a chronic problem that goes against this demand. In time-division communication, the RF Power Amplifier (RF PA) is driven at the audio-frequency cycle, and it influences the audio signal in various ways. In this paper, we measured the ground bounce noise generated by the peak current flowing through the ground network of the RF PA at the audio frequency, and confirmed that this noise is the cause of the audible buzz noise during a call. In addition, a grounding method for the microphone device that can reduce the buzz noise is proposed. Considering that the level of the audio signal generated by the microphone device is -38 dBV at 94 dB Sound Pressure Level (SPL), even a ground bounce noise of several hundred µV will fall within the audible range if it is coupled into the audio amplifier. Through the grounding method of the microphone device proposed in this paper, it was confirmed that the audible buzz noise power density at the RF PA driving frequency was improved by more than 5 dB under the conditions of the Printed Circuit Board (PCB) used in the experiment. A fundamental method for improving the buzz noise during a mobile phone call is thus presented. 
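The level comparison in the abstract can be reproduced with a few lines of decibel arithmetic; the 300 µV bounce amplitude below is a hypothetical value inside the "several hundred µV" range the authors cite:

```python
import math

def dbv(volts):
    """Express an RMS voltage in dBV (decibels relative to 1 V)."""
    return 20.0 * math.log10(volts)

mic_v = 10 ** (-38 / 20)  # -38 dBV at 94 dB SPL, per the abstract (~12.6 mV)
bounce_v = 300e-6         # hypothetical bounce within "several hundred µV"

# Margin between the speech-level microphone signal and the bounce noise:
margin_db = dbv(mic_v) - dbv(bounce_v)  # roughly 32-33 dB

# The paper reports >5 dB improvement in buzz power density; in voltage
# terms that is a reduction by a factor of at least 10**(5/20).
improvement_factor = 10 ** (5 / 20)
```

A margin of only ~30 dB between speech and noise is easily audible in a quiet room, which is why even sub-millivolt ground bounce matters here.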
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio%20frequency" title="audio frequency">audio frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=buzz%20noise" title=" buzz noise"> buzz noise</a>, <a href="https://publications.waset.org/abstracts/search?q=ground%20bounce" title=" ground bounce"> ground bounce</a>, <a href="https://publications.waset.org/abstracts/search?q=microphone%20grounding" title=" microphone grounding"> microphone grounding</a> </p> <a href="https://publications.waset.org/abstracts/150713/a-study-on-the-improvement-of-mobile-device-call-buzz-noise-caused-by-audio-frequency-ground-bounce" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150713.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">140</span> A 3kW Grid Connected Residential Energy Storage System with PV and Li-Ion Battery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Moiz%20Masood%20Syed">Moiz Masood Syed</a>, <a href="https://publications.waset.org/abstracts/search?q=Seong-Jun%20Hong"> Seong-Jun Hong</a>, <a href="https://publications.waset.org/abstracts/search?q=Geun-Hie%20Rim"> Geun-Hie Rim</a>, <a href="https://publications.waset.org/abstracts/search?q=Kyung-Ae%20Cho"> Kyung-Ae Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Suk%20Kim"> Hyoung-Suk Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the near future, energy storage will play a vital role in advancing present-day technology. 
Energy storage becomes necessary when renewable energy sources are connected to the grid, since the stored energy adds to the total energy in the system and utilities require more power when peak demand occurs. This paper describes the operational function of a 3 kW grid-connected residential Energy Storage System (ESS) which is connected to a Photovoltaic (PV) array at its input side. The system can perform the bidirectional functions of discharging to the grid when power demand becomes high and charging from the grid when it becomes low. It consists of a PV module, a Power Conditioning System (PCS) containing a bidirectional DC/DC converter and a bidirectional DC/AC inverter, and a lithium-ion battery pack. The ESS configuration, specifications, and control are described. The bidirectional DC/DC converter performs maximum power point tracking (MPPT) and maintains the stability of the PV array in case of power deficiency to fulfill the load requirements. The bidirectional DC/AC inverter has good voltage regulation properties: low total harmonic distortion (THD), low electromagnetic interference (EMI), fast response, and anti-islanding characteristics. Experimental results demonstrate the effectiveness of the proposed system. 
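The abstract states that the bidirectional DC/DC converter tracks the maximum power point but does not name the algorithm; as an illustrative assumption, the sketch below implements perturb-and-observe (P&O), one common MPPT scheme, against a toy PV power curve:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One iteration of a perturb-and-observe MPPT update.

    If the last voltage perturbation increased the extracted power, keep
    perturbing in the same direction; otherwise reverse. Returns the
    next PV operating-voltage reference.
    """
    if p >= p_prev:
        direction = 1.0 if v >= v_prev else -1.0   # keep going
    else:
        direction = -1.0 if v >= v_prev else 1.0   # reverse
    return v + direction * step

# Toy PV power curve with a maximum near 30 V (purely illustrative).
def pv_power(v):
    return max(0.0, -0.5 * (v - 30.0) ** 2 + 450.0)

v_prev, v = 20.0, 20.5
p_prev = pv_power(v_prev)
for _ in range(100):
    p = pv_power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
# v now oscillates in a small band around the maximum power point
```

The fixed step size is the classic P&O trade-off: larger steps converge faster but oscillate more widely around the maximum power point.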
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=energy%20storage%20system" title="energy storage system">energy storage system</a>, <a href="https://publications.waset.org/abstracts/search?q=photovoltaic" title=" photovoltaic"> photovoltaic</a>, <a href="https://publications.waset.org/abstracts/search?q=DC%2FDC%20converter" title=" DC/DC converter"> DC/DC converter</a>, <a href="https://publications.waset.org/abstracts/search?q=DC%2FAC%20inverter" title=" DC/AC inverter"> DC/AC inverter</a> </p> <a href="https://publications.waset.org/abstracts/20075/a-3kw-grid-connected-residential-energy-storage-system-with-pv-and-li-ion-battery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20075.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">641</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">139</span> A Simulation-Based Study of Dust Ingression into Microphone of Indoor Consumer Electronic Devices</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhichao%20Song">Zhichao Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Swanand%20Vaidya"> Swanand Vaidya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, most portable (e.g., smartphones) and wearable (e.g., smartwatches and earphones) consumer hardware are designed to be dustproof following IP5 or IP6 ratings to ensure the product is able to handle potentially dusty outdoor environments. On the other hand, the design guideline is relatively vague for indoor devices (e.g., smart displays and speakers). 
While it is generally believed that the indoor environment is much less dusty, in certain circumstances dust ingression can still cause functional failures, such as a microphone frequency-response shift or a camera black spot, or cosmetic dissatisfaction, mainly dust build-up in visible pockets and gaps that is hard to clean. In this paper, we developed a simulation methodology to analyze dust settlement and ingression into known ports of a device. A closed system is initialized with dust particles whose sizes follow a Weibull distribution based on data collected in a user study, and dust particle movement is approximated as settlement in a stationary fluid, which is governed by Stokes&#8217; law. Following this method, we simulated dust ingression into a MEMS microphone through the acoustic port and protective mesh. Various design and environmental parameters are evaluated, including mesh pore size, acoustic port depth-to-diameter ratio, mass density of the dust material, and inclination angle of the microphone port. The dependencies of dust resistance on these parameters are all monotonic: a smaller mesh pore size, a larger acoustic depth-to-opening ratio, and a more inclined microphone placement (towards the horizontal direction) are preferred for dust resistance, although these preferences may represent trade-offs in audio performance and compromises in industrial design. The simulation results suggest the quantitative ranges of these parameters in which the effects on dust resistance are most pronounced. Based on the simulation results, we propose several design guidelines intended to achieve an overall balanced design across audio performance, dust resistance, and flexibility in industrial design. 
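A minimal sketch of the settlement model described above, using Stokes' law for the terminal velocity of a small sphere in still air; the particle diameter, material density, and Weibull shape/scale below are illustrative values, not parameters fitted in the paper's user study:

```python
import math
import random

def stokes_settling_velocity(d_m, rho_p, rho_f=1.2, mu=1.8e-5, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in still air.

    Stokes' law: v = g * d^2 * (rho_p - rho_f) / (18 * mu), valid at the
    low particle Reynolds numbers typical of micrometre-scale indoor dust.
    """
    return g * d_m ** 2 * (rho_p - rho_f) / (18.0 * mu)

# A 10-micron particle with an assumed mineral-dust density of 2650 kg/m^3
# settles on the order of millimetres per second:
v = stokes_settling_velocity(10e-6, 2650.0)

# The abstract draws particle sizes from a Weibull distribution; the scale
# (first argument, in microns) and shape here are illustrative only.
random.seed(0)
diameters_um = [random.weibullvariate(8.0, 1.5) for _ in range(1000)]
```

Because the settling velocity scales with the particle diameter squared, the upper tail of the size distribution dominates how quickly dust reaches and loads the acoustic port mesh.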
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dust%20settlement" title="dust settlement">dust settlement</a>, <a href="https://publications.waset.org/abstracts/search?q=numerical%20simulation" title=" numerical simulation"> numerical simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=microphone%20design" title=" microphone design"> microphone design</a>, <a href="https://publications.waset.org/abstracts/search?q=Weibull%20distribution" title=" Weibull distribution"> Weibull distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=Stoke%27s%20equation" title=" Stoke&#039;s equation"> Stoke&#039;s equation</a> </p> <a href="https://publications.waset.org/abstracts/147233/a-simulation-based-study-of-dust-ingression-into-microphone-of-indoor-consumer-electronic-devices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147233.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">107</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">138</span> Preparation on Sentimental Analysis on Social Media Comments with Bidirectional Long Short-Term Memory Gated Recurrent Unit and Model Glove in Portuguese</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Leonardo%20Alfredo%20Mendoza">Leonardo Alfredo Mendoza</a>, <a href="https://publications.waset.org/abstracts/search?q=Cristian%20Munoz"> Cristian Munoz</a>, <a href="https://publications.waset.org/abstracts/search?q=Marco%20Aurelio%20Pacheco"> Marco Aurelio Pacheco</a>, <a href="https://publications.waset.org/abstracts/search?q=Manoela%20Kohler"> Manoela Kohler</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Evelyn%20%20Batista"> Evelyn Batista</a>, <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20Moura"> Rodrigo Moura</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Natural language processing (NLP) techniques are increasingly able to interpret the feelings and reactions of a person to a product or service. Sentiment analysis has become a fundamental tool for this interpretation but has few applications in languages other than English. This paper presents a sentiment analysis classification for Portuguese, based on a corpus of social network comments in Portuguese. A word-embedding representation was used with a pre-trained 50-dimension GloVe model, generated from a corpus entirely in Portuguese. To produce this classification, bidirectional long short-term memory (BiLSTM) and bidirectional gated recurrent unit (GRU) models are used, reaching results of 99.1%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=natural%20processing%20language" title="natural processing language">natural processing language</a>, <a href="https://publications.waset.org/abstracts/search?q=sentiment%20analysis" title=" sentiment analysis"> sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=bidirectional%20long%20short-term%20memory" title=" bidirectional long short-term memory"> bidirectional long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=BI-LSTM" title=" BI-LSTM"> BI-LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=gated%20recurrent%20unit" title=" gated recurrent unit"> gated recurrent unit</a>, <a href="https://publications.waset.org/abstracts/search?q=GRU" title=" GRU"> GRU</a> </p> <a 
href="https://publications.waset.org/abstracts/131061/preparation-on-sentimental-analysis-on-social-media-comments-with-bidirectional-long-short-term-memory-gated-recurrent-unit-and-model-glove-in-portuguese" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/131061.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">137</span> Tensile Properties of 3D Printed PLA under Unidirectional and Bidirectional Raster Angle: A Comparative Study </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shilpesh%20R.%20Rajpurohit">Shilpesh R. Rajpurohit</a>, <a href="https://publications.waset.org/abstracts/search?q=Harshit%20K.%20Dave"> Harshit K. Dave</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fused deposition modeling (FDM) has gained popularity in recent times due to its capability to create prototypes as well as functional end-use products directly from a CAD file. Parts fabricated using the FDM process have mechanical properties comparable with those of injection-molded parts. However, the performance of an FDM part is severely limited by the poor mechanical properties that arise from the layered structure of the printed part. These mechanical properties can be improved by proper selection of process variables. In the present study, a comparison between unidirectional and bidirectional raster angles has been carried out at combinations of different layer heights and raster widths. The unidirectional raster angle was varied at five levels, and the bidirectional raster angle was varied at three levels. 
Tensile specimens were fabricated and tested according to the ASTM D638 standard. From the results, it can be observed that the highest tensile strength was obtained at the 0° raster angle, followed by the 45°/45° raster angle, while the lowest tensile strength was obtained at the 90° raster angle. Analysis of the fractured surfaces revealed that failure takes place along the raster deposition direction for unidirectional raster angles, while zigzag failure is observed for bidirectional raster angles. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=additive%20manufacturing" title="additive manufacturing">additive manufacturing</a>, <a href="https://publications.waset.org/abstracts/search?q=fused%20deposition%20modeling" title=" fused deposition modeling"> fused deposition modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=unidirectional" title=" unidirectional"> unidirectional</a>, <a href="https://publications.waset.org/abstracts/search?q=bidirectional" title=" bidirectional"> bidirectional</a>, <a href="https://publications.waset.org/abstracts/search?q=raster%20angle" title=" raster angle"> raster angle</a>, <a href="https://publications.waset.org/abstracts/search?q=tensile%20strength" title=" tensile strength"> tensile strength</a> </p> <a href="https://publications.waset.org/abstracts/86885/tensile-properties-of-3d-printed-pla-under-unidirectional-and-bidirectional-raster-angle-a-comparative-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86885.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">185</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">136</span> The Grand Unified Theory of Bidirectional Spacetime with 
Spatial Covariance and Wave-Particle Duality in Spacetime Flow Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tory%20Erickson">Tory Erickson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The "Bidirectional Spacetime with Spatial Covariance and Wave-Particle Duality in Spacetime Flow" (BST-SCWPDF) Model introduces a framework aimed at unifying general relativity (GR) and quantum mechanics (QM). By proposing a concept of bidirectional spacetime, this model suggests that time can flow in more than one direction, thus offering a perspective on temporal dynamics. Integrated with spatial covariance and wave-particle duality in spacetime flow, the BST-SCWPDF Model resolves long-standing discrepancies between GR and QM. This unified theory has profound implications for quantum gravity, potentially offering insights into quantum entanglement, the collapse of the wave function, and the fabric of spacetime itself. The BST-SCWPDF Model thus offers researchers a framework for a better understanding of theoretical physics. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=astrophysics" title="astrophysics">astrophysics</a>, <a href="https://publications.waset.org/abstracts/search?q=quantum%20mechanics" title=" quantum mechanics"> quantum mechanics</a>, <a href="https://publications.waset.org/abstracts/search?q=general%20relativity" title=" general relativity"> general relativity</a>, <a href="https://publications.waset.org/abstracts/search?q=unification%20theory" title=" unification theory"> unification theory</a>, <a href="https://publications.waset.org/abstracts/search?q=theoretical%20physics" title=" theoretical physics"> theoretical physics</a> </p> <a href="https://publications.waset.org/abstracts/183765/the-grand-unified-theory-of-bidirectional-spacetime-with-spatial-covariance-and-wave-particle-duality-in-spacetime-flow-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183765.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">86</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">135</span> Digital Recording System Identification Based on Audio File</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michel%20Kulhandjian">Michel Kulhandjian</a>, <a href="https://publications.waset.org/abstracts/search?q=Dimitris%20A.%20Pados"> Dimitris A. Pados</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this work is to develop a theoretical framework for reliable digital recording system identification from digital audio files alone, for forensic purposes. A digital recording system consists of a microphone and a digital sound processing card. 
We view the cascade as a single system with an unknown transfer function. We expect microphone-sound card combinations of the same manufacturer and model to have very similar, near-identical transfer functions, barring any unique manufacturing defects. Input voice (or other) signals are modeled as non-stationary processes. The technical problem under consideration becomes blind deconvolution with non-stationary inputs as it manifests itself in the specific application of digital audio recording equipment classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification" title="blind system identification">blind system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=audio%20fingerprinting" title=" audio fingerprinting"> audio fingerprinting</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution" title=" blind deconvolution"> blind deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20dereverberation" title=" blind dereverberation"> blind dereverberation</a> </p> <a href="https://publications.waset.org/abstracts/75122/digital-recording-system-identification-based-on-audio-file" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75122.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">304</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">134</span> A Context-Centric Chatbot for Cryptocurrency Using the Bidirectional Encoder Representations from Transformers Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qitao%20Xie">Qitao Xie</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Qingquan%20Zhang"> Qingquan Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaofei%20Zhang"> Xiaofei Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Di%20Tian"> Di Tian</a>, <a href="https://publications.waset.org/abstracts/search?q=Ruixuan%20Wen"> Ruixuan Wen</a>, <a href="https://publications.waset.org/abstracts/search?q=Ting%20Zhu"> Ting Zhu</a>, <a href="https://publications.waset.org/abstracts/search?q=Ping%20Yi"> Ping Yi</a>, <a href="https://publications.waset.org/abstracts/search?q=Xin%20Li"> Xin Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Inspired by the recent movement of digital currency, we are building a question answering system concerning the subject of cryptocurrency using Bidirectional Encoder Representations from Transformers (BERT). The motivation behind this work is to properly assist digital currency investors by directing them to the corresponding knowledge bases that can offer them help and increase the querying speed. BERT, one of the newest language models in natural language processing, was investigated to improve the quality of generated responses. We studied different combinations of hyperparameters of the BERT model to obtain the best-fit responses. Further, we created an intelligent chatbot for cryptocurrency using BERT. A chatbot using BERT shows great potential for the further advancement of a cryptocurrency market tool. We show that the BERT neural network generalizes well to other tasks by applying it successfully to cryptocurrency. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bidirectional%20encoder%20representations%20from%20transformers" title="bidirectional encoder representations from transformers">bidirectional encoder representations from transformers</a>, <a href="https://publications.waset.org/abstracts/search?q=BERT" title=" BERT"> BERT</a>, <a href="https://publications.waset.org/abstracts/search?q=chatbot" title=" chatbot"> chatbot</a>, <a href="https://publications.waset.org/abstracts/search?q=cryptocurrency" title=" cryptocurrency"> cryptocurrency</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/129261/a-context-centric-chatbot-for-cryptocurrency-using-the-bidirectional-encoder-representations-from-transformers-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129261.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">133</span> Estimating Lost Digital Video Frames Using Unidirectional and Bidirectional Estimation Based on Autoregressive Time Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Navid%20Daryasafar">Navid Daryasafar</a>, <a href="https://publications.waset.org/abstracts/search?q=Nima%20Farshidfar"> Nima Farshidfar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this article, we attempt to conceal errors in video, with an emphasis on the temporal use of autoregressive (AR) models. 
To resolve this problem, we assume that all information in one or more video frames is lost. The lost frames are then estimated using the temporal information of corresponding pixels in successive frames. Accordingly, after presenting autoregressive models and how they are applied to estimate lost frames, two general methods are presented for using these models. The first method, the standard autoregressive approach, estimates the lost frame unidirectionally; usually, information from previous frames is used to estimate the lost frame. In the second method, information from both the previous and the next frames is used, so this method is known as bidirectional estimation. Finally, a series of tests assesses the performance of each method in different modes, and the results are compared. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=error%20steganography" title="error steganography">error steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=unidirectional%20estimation" title=" unidirectional estimation"> unidirectional estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=bidirectional%20estimation" title=" bidirectional estimation"> bidirectional estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=AR%20linear%20estimation" title=" AR linear estimation"> AR linear estimation</a> </p> <a href="https://publications.waset.org/abstracts/14175/estimating-lost-digital-video-frames-using-unidirectional-and-bidirectional-estimation-based-on-autoregressive-time-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14175.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">540</span> </span> </div> </div> <div class="card
paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">132</span> Bidirectional Dynamic Time Warping Algorithm for the Recognition of Isolated Words Impacted by Transient Noise Pulses</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=G.%20Tamulevi%C4%8Dius">G. Tamulevičius</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Serackis"> A. Serackis</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Sledevi%C4%8D"> T. Sledevič</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Navakauskas"> D. Navakauskas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We consider one of the biggest challenges in speech recognition – noise reduction. Traditionally, detected transient noise pulses are removed from the corrupted speech using pulse models. In this paper, we propose to cope with the problem directly in the Dynamic Time Warping domain. A Bidirectional Dynamic Time Warping algorithm for the recognition of isolated words impacted by transient noise pulses is proposed. It uses a simple transient noise pulse detector, employs bidirectional computation of dynamic time warping, and directly manipulates the warping results. Experimental investigation against several alternative solutions confirms the effectiveness of the proposed algorithm in reducing the impact of noise on the recognition process – a 3.9% increase in noisy speech recognition accuracy is achieved. 
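The bidirectional computation described above builds on the classic dynamic time warping recurrence. As a point of reference, a minimal generic DTW sketch in Python is given below; this is a textbook formulation, not the authors' implementation, which additionally runs the recurrence from both ends of the utterance around detected noise pulses.

```python
def dtw_distance(a, b):
    """Classic DTW distance between two 1-D feature sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal accumulated distance aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Identical sequences align with zero cost, and a repeated frame (as produced by time stretching) is absorbed by the warping path at no extra cost.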
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=transient%20noise%20pulses" title="transient noise pulses">transient noise pulses</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20reduction" title=" noise reduction"> noise reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20time%20warping" title=" dynamic time warping"> dynamic time warping</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a> </p> <a href="https://publications.waset.org/abstracts/7831/bidirectional-dynamic-time-warping-algorithm-for-the-recognition-of-isolated-words-impacted-by-transient-noise-pulses" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7831.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">559</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">131</span> Distant Speech Recognition Using Laser Doppler Vibrometer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yunbin%20Deng">Yunbin Deng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most existing applications of automatic speech recognition rely on cooperative subjects at a short distance from a microphone. Standoff speech recognition using microphone arrays can extend the subject-to-sensor distance somewhat, but it is still limited to only a few feet. As such, most deployed applications of standoff speech recognition are limited to indoor use at short range. Moreover, these applications require an air passage between the subject and the sensor to achieve a reasonable signal-to-noise ratio. 
This study reports long-range (50 feet) automatic speech recognition experiments using a Laser Doppler Vibrometer (LDV) sensor. This study shows that the LDV sensor modality can extend the speech acquisition standoff distance far beyond microphone arrays, to hundreds of feet. In addition, LDV enables 'listening' through windows for uncooperative subjects. This enables new capabilities in automatic audio and speech intelligence, surveillance, and reconnaissance (ISR) for law enforcement, homeland security, and counter-terrorism applications. The Polytec LDV model OFV-505 is used in this study. To investigate the impact of different vibrating materials, five parallel LDV speech corpora, each consisting of 630 speakers, were collected from the vibrations of a glass window, a metal plate, a plastic box, a wood slate, and a concrete wall. These are common materials the application could encounter in daily life. These data were compared with their microphone counterparts to manifest the impact of the various materials on the spectrum of the LDV speech signal. State-of-the-art deep neural network modeling approaches are used to conduct continuous speaker-independent speech recognition on these LDV speech datasets. Preliminary phoneme recognition results using a time-delay neural network, bidirectional long short-term memory, and model fusion show great promise for using LDV for long-range speech recognition. To the author’s best knowledge, this is the first time an LDV has been reported for a long-distance speech recognition application. 
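The spectral comparison described above — examining how each vibrating material shapes the spectrum of the recovered speech — can be sketched with a plain magnitude spectrum and a dominant-frequency search. The following is an illustrative stdlib-only sketch (a naive O(N²) DFT, not the study's processing chain; real pipelines would use an FFT library):

```python
import cmath
import math

def magnitude_spectrum(x):
    """Naive DFT magnitude spectrum of a real signal x (first N/2 bins)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def dominant_bin(x):
    """Index of the strongest non-DC frequency bin."""
    spec = magnitude_spectrum(x)
    return max(range(1, len(spec)), key=spec.__getitem__)

# e.g. a sine completing 5 cycles over 64 samples peaks in bin 5
signal = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
```

Comparing such spectra between an LDV channel and a reference microphone channel reveals the frequency bands attenuated by a given material.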
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=covert%20speech%20acquisition" title="covert speech acquisition">covert speech acquisition</a>, <a href="https://publications.waset.org/abstracts/search?q=distant%20speech%20recognition" title=" distant speech recognition"> distant speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=DSR" title=" DSR"> DSR</a>, <a href="https://publications.waset.org/abstracts/search?q=laser%20Doppler%20vibrometer" title=" laser Doppler vibrometer"> laser Doppler vibrometer</a>, <a href="https://publications.waset.org/abstracts/search?q=LDV" title=" LDV"> LDV</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20intelligence%20surveillance%20and%20reconnaissance" title=" speech intelligence surveillance and reconnaissance"> speech intelligence surveillance and reconnaissance</a>, <a href="https://publications.waset.org/abstracts/search?q=ISR" title=" ISR"> ISR</a> </p> <a href="https://publications.waset.org/abstracts/99091/distant-speech-recognition-using-laser-doppler-vibrometer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/99091.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">179</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">130</span> An Application of Bidirectional Option Contract to Coordinate a Dyadic Fashion Apparel Supply Chain </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arnab%20Adhikari">Arnab Adhikari</a>, <a href="https://publications.waset.org/abstracts/search?q=Arnab%20Bisi"> Arnab Bisi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the 
inception, the fashion apparel supply chain has faced the problem of high demand uncertainty. Demand volatility often compels the corresponding supply chain member to incur substantial holding costs and opportunity costs in the overproduction and the underproduction scenarios, respectively. This leads to an uncoordinated fashion apparel supply chain. Several scholarly works have sought to achieve coordination in the fashion apparel supply chain by employing different contracts such as the buyback contract, the revenue sharing contract, the option contract, and so on. In particular, the application of the option contract in the apparel industry has become prevalent with the changing global scenario. Exploration of the existing literature related to the option contract reveals that most research works concentrate on one-directional demand adjustment, i.e., matching demand either upwards or downwards. Here, we present a holistic approach to coordinating a dyadic fashion apparel supply chain comprising one manufacturer and one retailer with the help of a bidirectional option contract. We show that a combination of a wholesale price contract and a bidirectional option contract can coordinate the supply chain under consideration. We also propose a framework that captures the variation of the apparel retailer’s order quantity and the apparel manufacturer’s production quantity with the changing exercise price for different ranges of the option price. We analytically show that the cost parameters of the supply chain members, along with the nature of the demand distribution, play an instrumental role in the coordination as well as in the retailer’s ordering decision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fashion%20apparel%20supply%20chain" title="fashion apparel supply chain">fashion apparel supply chain</a>, <a href="https://publications.waset.org/abstracts/search?q=supply%20chain%20coordination" title=" supply chain coordination"> supply chain coordination</a>, <a href="https://publications.waset.org/abstracts/search?q=wholesale%20price%20contract" title=" wholesale price contract"> wholesale price contract</a>, <a href="https://publications.waset.org/abstracts/search?q=bidirectional%20option%20contract" title=" bidirectional option contract"> bidirectional option contract</a> </p> <a href="https://publications.waset.org/abstracts/38689/an-application-of-bidirectional-option-contract-to-coordinate-a-dyadic-fashion-apparel-supply-chain" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38689.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">441</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">129</span> Global Mittag-Leffler Stability of Fractional-Order Bidirectional Associative Memory Neural Network with Discrete and Distributed Transmission Delays</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Swati%20Tyagi">Swati Tyagi</a>, <a href="https://publications.waset.org/abstracts/search?q=Syed%20Abbas"> Syed Abbas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fractional-order Hopfield neural networks are generally used to model the information processing among the interacting neurons. To show the constancy of the processed information, it is required to analyze the stability of these systems. 
In this work, we perform Mittag-Leffler stability analysis for the corresponding Caputo fractional-order bidirectional associative memory (BAM) neural networks with various time delays. We derive sufficient conditions to ensure the existence and uniqueness of the equilibrium point by using topological degree theory. By applying the fractional Lyapunov method and Mittag-Leffler functions, we derive sufficient conditions for global Mittag-Leffler stability, which further implies the global asymptotic stability of the network equilibrium. Finally, we present two suitable examples to show the effectiveness of the obtained results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bidirectional%20associative%20memory%20neural%20network" title="bidirectional associative memory neural network">bidirectional associative memory neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=existence%20and%20uniqueness" title=" existence and uniqueness"> existence and uniqueness</a>, <a href="https://publications.waset.org/abstracts/search?q=fractional-order" title=" fractional-order"> fractional-order</a>, <a href="https://publications.waset.org/abstracts/search?q=Lyapunov%20function" title=" Lyapunov function"> Lyapunov function</a>, <a href="https://publications.waset.org/abstracts/search?q=Mittag-Leffler%20stability" title=" Mittag-Leffler stability"> Mittag-Leffler stability</a> </p> <a href="https://publications.waset.org/abstracts/52374/global-mittag-leffler-stability-of-fractional-order-bidirectional-associative-memory-neural-network-with-discrete-and-distributed-transmission-delays" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52374.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">364</span> </span> </div> </div> <div
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">128</span> Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marco%20Furinghetti">Marco Furinghetti</a>, <a href="https://publications.waset.org/abstracts/search?q=Alberto%20Pavese"> Alberto Pavese</a>, <a href="https://publications.waset.org/abstracts/search?q=Michele%20Rinaldi"> Michele Rinaldi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Concave Surface Slider devices have been increasingly used in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out in order to investigate the lateral response of this typology of devices, and a reasonably high level of knowledge has been reached. If a radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses have to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software for the adjustment of natural accelerograms is nowadays available, which leads to a higher quality of spectrum-compatibility and to a smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. 
Different case study structures have been analyzed. In a first stage, the capacity curve has been computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements have been adopted and different direction angles of lateral forces have been studied. Thanks to these results, a linear elastic Finite Element Model has been defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses have been performed on the base-isolated structures by applying seven bidirectional seismic events. The spectrum-compatibility of bidirectional earthquakes has been studied by considering different combinations of single components and adjusting single records: thanks to the proposed procedure, results have shown a small dispersion and a good agreement in comparison to the assumed design values. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=concave%20surface%20slider" title="concave surface slider">concave surface slider</a>, <a href="https://publications.waset.org/abstracts/search?q=spectrum-compatibility" title=" spectrum-compatibility"> spectrum-compatibility</a>, <a href="https://publications.waset.org/abstracts/search?q=bidirectional%20earthquake" title=" bidirectional earthquake"> bidirectional earthquake</a>, <a href="https://publications.waset.org/abstracts/search?q=base%20isolation" title=" base isolation"> base isolation</a> </p> <a href="https://publications.waset.org/abstracts/64725/design-and-assessment-of-base-isolated-structures-under-spectrum-compatible-bidirectional-earthquakes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64725.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">292</span> </span> </div> </div> <div
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">127</span> The Relationship between Spindle Sound and Tool Performance in Turning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20Seemuang">N. Seemuang</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20McLeay"> T. McLeay</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Slatter"> T. Slatter </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Worn tools have a direct effect on the surface finish and part accuracy. Tool condition monitoring systems have been developed over a long period and are used to avoid the loss of productivity that results from using a worn tool. However, the majority of tool monitoring research has applied expensive sensing systems not suitable for production. In this work, the cutting sound of a turning machine was studied using a microphone. Machining trials using seven cutting conditions were conducted until the observable flank wear width (FWW) on the main cutting edge exceeded 0.4 mm. The cutting inserts were then removed from the tool holder and the flank wear width was measured optically. A microphone with a built-in preamplifier was used to record the machining sound of EN24 steel being face-turned by a CNC lathe in a wet cutting condition using constant surface speed control. The sound was sampled at 50 kS/s, and all sound signals recorded from the microphone were transformed into the frequency domain by FFT in order to establish the frequency content in the audio signature that could then be used for tool condition monitoring. The extracted feature from the audio signal was compared to the flank wear progression on the cutting inserts. The spectrogram reveals a promising feature, named ‘spindle noise’, which is emitted by the main spindle motor of the turning machine. 
The spindle noise frequency was detected at 5.86 kHz regardless of the cutting conditions used on this particular CNC lathe. Varying the cutting speed and feed rate influences the magnitude of the power spectrum of the spindle noise. The magnitude of the spindle noise frequency changes in conjunction with the tool wear progression: it increases significantly in the transition state between steady-state wear and severe wear. This could be used as a warning signal to prepare for tool replacement or to adapt cutting parameters to extend tool life. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tool%20wear" title="tool wear">tool wear</a>, <a href="https://publications.waset.org/abstracts/search?q=flank%20wear" title=" flank wear"> flank wear</a>, <a href="https://publications.waset.org/abstracts/search?q=condition%20monitoring" title=" condition monitoring"> condition monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=spindle%20noise" title=" spindle noise"> spindle noise</a> </p> <a href="https://publications.waset.org/abstracts/32232/the-relationship-between-spindle-sound-and-tool-performance-in-turning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32232.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">338</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">126</span> Bidirectional Long Short-Term Memory-Based Signal Detection for Orthogonal Frequency Division Multiplexing With All Index Modulation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahmut%20Yildirim">Mahmut Yildirim</a> </p> <p
class="card-text"><strong>Abstract:</strong></p> This paper proposes Bi-DeepAIM, a bidirectional long short-term memory (Bi-LSTM) network-aided deep learning (DL)-based signal detection scheme for orthogonal frequency division multiplexing with all index modulation (OFDM-AIM). OFDM-AIM was developed to increase the spectral efficiency of OFDM with index modulation (OFDM-IM), a promising multi-carrier technique for communication systems beyond 5G. In this paper, due to its strong classification ability, Bi-LSTM is considered as an alternative to the maximum likelihood (ML) algorithm, which is used for signal detection in the classical OFDM-AIM scheme. The performance of Bi-DeepAIM is compared with the LSTM network-aided DL-based OFDM-AIM (DeepAIM) and the classic OFDM-AIM that uses ML-based signal detection, in terms of bit error rate (BER) performance and computation time. Simulation results show that Bi-DeepAIM obtains better BER performance than DeepAIM and a lower computation time in signal detection than ML-AIM. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bidirectional%20long%20short-term%20memory" title="bidirectional long short-term memory">bidirectional long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=maximum%20likelihood" title=" maximum likelihood"> maximum likelihood</a>, <a href="https://publications.waset.org/abstracts/search?q=OFDM%20with%20all%20index%20modulation" title=" OFDM with all index modulation"> OFDM with all index modulation</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20detection" title=" signal detection"> signal detection</a> </p> <a href="https://publications.waset.org/abstracts/183512/bidirectional-long-short-term-memory-based-signal-detection-for-orthogonal-frequency-division-multiplexing-with-all-index-modulation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183512.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">125</span> Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Luis%20Alvarado">Luis Alvarado</a>, <a href="https://publications.waset.org/abstracts/search?q=Victor%20Poblete"> Victor Poblete</a>, <a href="https://publications.waset.org/abstracts/search?q=Isaac%20Gonzalez"> Isaac Gonzalez</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Yetzabeth%20Gonzalez"> Yetzabeth Gonzalez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor proposed by Korzeniowski and Widmer reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic conditions and at distinct source-microphone distances. The evaluation material comprises the Beatles and Queen datasets, sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (approximately 0 s, i.e., anechoic, and 1, 2, and 3 s) and at four source-microphone distances (32, 64, 128, and 256 cm). The performance of the trained DNN is expected to decrease dramatically under these conditions, with signals degraded by room reverberation and by distance from the source. Recently, the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) were assessed in a text-independent speaker verification task using speech degraded by additive noise at different signal-to-noise ratios and recording distances, as well as under reverberant conditions with varying recording distance. LNCC performed as well as state-of-the-art Mel-Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected over classical triangular filters, compensating for the music signal degradation and improving the accuracy of the chord recognition system. 
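The quarter-tone spacing underlying the proposed LNQT filters can be sketched as follows (the reference frequency and filter count are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def quarter_tone_centers(f_ref: float = 27.5, n_filters: int = 24 * 8) -> np.ndarray:
    """Center frequencies spaced a quarter tone apart: 24 filters per octave,
    so consecutive centers differ by a factor of 2**(1/24)."""
    k = np.arange(n_filters)
    return f_ref * 2.0 ** (k / 24.0)

centers = quarter_tone_centers()
ratios = centers[1:] / centers[:-1]
print(np.allclose(ratios, 2 ** (1 / 24)))  # True: constant quarter-tone ratio
```

Triangular filters centered on these frequencies give twice the resolution of the usual semitone chroma bins, which is what allows the local normalization to act within a quarter tone.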
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chord%20recognition" title="chord recognition">chord recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title=" deep neural networks"> deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=music%20information%20retrieval" title=" music information retrieval"> music information retrieval</a> </p> <a href="https://publications.waset.org/abstracts/92608/robustness-of-the-deep-chroma-extractor-and-locally-normalized-quarter-tone-filters-in-automatic-chord-estimation-under-reverberant-conditions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92608.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">124</span> Implementation of Real-Time Multiple Sound Source Localization and Separation </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jeng-Shin%20Sheu">Jeng-Shin Sheu</a>, <a href="https://publications.waset.org/abstracts/search?q=Qi-Xun%20Zheng"> Qi-Xun Zheng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper mainly discusses a method of separating speech when using a microphone array without knowing the number and direction of sound sources. 
In recent years, there have been many studies on separating signals by masking, but most such methods require the number of sound sources to be known in advance and therefore cannot be used in real-time applications. Our method uses the circular integrated cross-spectrum to estimate the statistical histogram distribution of the direction of arrival (DOA), from which the number of sound sources and their directions in the mixed signal are obtained. In calculating the parameters of the circular integrated cross-spectrum, the phase of the cross-power spectrum and the phase rotation factors computed from the cross-power spectrum of each microphone pair are used. For speech separation, a DOA weighting and masking method assigns a source direction to each time-frequency (T-F) unit. The weight corresponding to each T-F unit is then used to strengthen the contribution of its sound source and to suppress the remaining sources, thereby achieving voice separation. 
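The role of the cross-power-spectrum phase in delay estimation can be sketched for a single microphone pair (a generic PHAT-weighted estimator, simplified from the circular integrated cross-spectrum described above):

```python
import numpy as np

def gcc_phat_delay(sig: np.ndarray, ref: np.ndarray) -> int:
    """Estimate the integer-sample delay of `sig` relative to `ref` from the
    phase of their cross-power spectrum (PHAT weighting keeps only the phase)."""
    n = len(sig)
    cross = np.fft.rfft(sig) * np.conj(np.fft.rfft(ref))  # cross-power spectrum
    cross /= np.abs(cross) + 1e-12                        # PHAT: discard magnitude, keep phase
    cc = np.fft.irfft(cross, n)                           # generalized cross-correlation
    lag = int(np.argmax(cc))
    return lag - n if lag > n // 2 else lag               # map index to signed lag

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = np.roll(x, 5)              # second microphone hears the source 5 samples later
print(gcc_phat_delay(y, x))    # 5
```

The recovered lag, together with the microphone spacing, fixes the DOA for that pair; accumulating such estimates per T-F unit yields the DOA histogram the paper builds.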
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=real-time" title="real-time">real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=spectrum%20analysis" title=" spectrum analysis"> spectrum analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20source%20localization" title=" sound source localization"> sound source localization</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20source%20separation" title=" sound source separation"> sound source separation</a> </p> <a href="https://publications.waset.org/abstracts/128672/implementation-of-real-time-multiple-sound-source-localization-and-separation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128672.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">123</span> A Novel Design Methodology for a 1.5 KW DC/DC Converter in EV and Hybrid EV Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Farhan%20Beg">Farhan Beg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a method for the efficient implementation of a unidirectional or bidirectional DC/DC converter. The DC/DC converter is used essentially for energy exchange between the low voltage service battery and a high voltage battery commonly found in Electric Vehicle applications. In these applications, apart from cost, efficiency of design is an important characteristic. A useful way to reduce the size of electronic equipment in the electric vehicles is proposed in this paper. 
The technique simplifies the mechanical complexity and maximizes energy usage using the latest converter control techniques. Moreover, a bidirectional battery charger for hybrid electric vehicles is also implemented. Several simulations of the test system have been carried out in the Matlab/Simulink environment. The results demonstrate the robustness of the proposed design methodology for a 1.5 kW DC-DC converter. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DC-DC%20converters" title="DC-DC converters">DC-DC converters</a>, <a href="https://publications.waset.org/abstracts/search?q=electric%20vehicles" title=" electric vehicles"> electric vehicles</a>, <a href="https://publications.waset.org/abstracts/search?q=power%20electronics" title=" power electronics"> power electronics</a>, <a href="https://publications.waset.org/abstracts/search?q=direct%20current%20control" title=" direct current control "> direct current control </a> </p> <a href="https://publications.waset.org/abstracts/15606/a-novel-design-methodology-for-a-15-kw-dcdc-converter-in-ev-and-hybrid-ev-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15606.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">727</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">122</span> Experimental Analysis of Structure Borne Noise in an Enclosure</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Waziralilah%20N.%20Fathiah">Waziralilah N. Fathiah</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Aminudin"> A. Aminudin</a>, <a href="https://publications.waset.org/abstracts/search?q=U.%20Alyaa%20Hashim"> U. Alyaa Hashim</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Vikneshvaran%20D.%20Shakirah%20Shukor"> T. Vikneshvaran D. Shakirah Shukor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an experimental analysis of structure-borne noise in a rectangular enclosure prototype made by joining sheet aluminum and plywood. The study is significant because the annoyance caused by structure-borne noise is often not realised. Modal analysis is carried out to characterise the structure's behaviour and to identify the characteristics of the enclosure in the frequency domain from 0 Hz to 200 Hz. A number of modes are identified and their mode shapes categorised. A modal experiment is used to diagnose the structural behaviour, while a microphone is used to capture the sound. Spectral testing is performed on the enclosure: it is acoustically excited using a shaker and, as it vibrates, the vibration and noise responses sensed by a tri-axis accelerometer and a microphone are recorded, respectively. Measurements are performed at each node on the gridded surface of the enclosure, with both measurements carried out simultaneously. The experimental modal results are validated by simulation in MSC Nastran software. To reduce the structure-borne noise, stiffener plates are placed perpendicularly on the sheet aluminum; with this method, the structure-borne noise is successfully reduced. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=enclosure" title="enclosure">enclosure</a>, <a href="https://publications.waset.org/abstracts/search?q=modal%20analysis" title=" modal analysis"> modal analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20analysis" title=" sound analysis"> sound analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=structure%20borne-noise" title=" structure borne-noise"> structure borne-noise</a> </p> <a href="https://publications.waset.org/abstracts/63244/experimental-analysis-of-structure-borne-noise-in-an-enclosure" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/63244.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">436</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">121</span> Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ebipatei%20Victoria%20Tunyan">Ebipatei Victoria Tunyan</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20A.%20Cao"> T. A. Cao</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheol%20Young%20Ock"> Cheol Young Ock</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detecting subjectively biased statements is a vital task. This is because this kind of bias, when present in the text or other forms of information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflicts amongst consumers. 
Subjective bias detection is also critical for many natural language processing (NLP) tasks such as sentiment analysis, opinion identification, and bias neutralization. A system that can adequately detect subjectivity in text would significantly boost research in these areas, and would also be useful for platforms like Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level; machine learning is well suited to this problem. A key step in our approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as the upstream model. BERT can itself be used as a classifier; in this study, however, we use BERT as a data preprocessor and embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism, which produces a deeper and better classifier. We evaluate the effectiveness of our model on the Wiki Neutrality Corpus (WNC), a benchmark dataset compiled from Wikipedia edits that removed biased phrasing from sentences, and compare it to existing approaches. Experimental analysis indicates improved performance: our model achieves state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages. 
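The attention pooling over Bi-LSTM hidden states can be sketched as follows (generic additive attention with made-up dimensions, not the paper's exact architecture):

```python
import numpy as np

def attention_pool(hidden: np.ndarray, w: np.ndarray):
    """Score each Bi-LSTM time step, softmax the scores over time, and return
    the weighted sum of hidden states as a fixed-size sentence representation.
    hidden: (seq_len, 2*hidden_dim) Bi-LSTM outputs; w: (2*hidden_dim,) scoring vector."""
    scores = hidden @ w                      # one relevance score per time step
    scores = np.exp(scores - scores.max())   # numerically stable softmax over time
    alpha = scores / scores.sum()            # attention weights, sum to 1
    context = alpha @ hidden                 # (2*hidden_dim,) attended representation
    return context, alpha

rng = np.random.default_rng(0)
H = rng.standard_normal((12, 8))   # 12 time steps, Bi-LSTM output size 2*4
w = rng.standard_normal(8)
context, alpha = attention_pool(H, w)
print(context.shape)               # (8,)
```

The `context` vector is what the downstream classification layer sees; the weights `alpha` indicate which tokens drove the subjectivity decision.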
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=subjective%20bias%20detection" title="subjective bias detection">subjective bias detection</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=BERT%E2%80%93BiLSTM%E2%80%93Attention" title=" BERT–BiLSTM–Attention"> BERT–BiLSTM–Attention</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20classification" title=" text classification"> text classification</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a> </p> <a href="https://publications.waset.org/abstracts/133543/improving-subjective-bias-detection-using-bidirectional-encoder-representations-from-transformers-and-bidirectional-long-short-term-memory" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133543.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">130</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">120</span> Study on Acoustic Source Detection Performance Improvement of Microphone Array Installed on Drones Using Blind Source Separation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Youngsun%20Moon">Youngsun Moon</a>, <a href="https://publications.waset.org/abstracts/search?q=Yeong-Ju%20Go"> Yeong-Ju Go</a>, <a href="https://publications.waset.org/abstracts/search?q=Jong-Soo%20Choi"> Jong-Soo Choi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most drones that currently have 
surveillance/reconnaissance missions are equipped primarily with optical equipment, but a microphone array can also be used to estimate the location of an acoustic source, providing additional information when optical equipment is unavailable. The purpose of this study is to estimate the direction of arrival (DOA) of an acoustic source from time difference of arrival (TDOA) estimates on the drone. The difficulty is that the target acoustic source cannot be measured cleanly because of the drone's own noise. To overcome this, the drone noise and the target source are separated using blind source separation (BSS) based on independent component analysis (ICA). ICA can be applied under the assumption that the drone noise and the target source are independent and that each signal is non-Gaussian. To maximize the non-Gaussianity of each signal, we use negentropy and kurtosis, measures grounded in probability theory. As a result, TDOA and DOA estimation of the target source in the noisy environment are improved. We simulated the performance of the DOA algorithm with the BSS algorithm applied and validated the simulation experimentally in an anechoic wind tunnel. 
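Kurtosis as a non-Gaussianity measure, as used in the ICA step above, can be sketched as follows (the distributions are illustrative, not drone recordings):

```python
import numpy as np

def excess_kurtosis(x: np.ndarray) -> float:
    """Fourth standardized moment minus 3; zero for a Gaussian signal.
    ICA exploits the fact that a mixture of sources looks 'more Gaussian'
    than the individual sources, so maximizing |kurtosis| unmixes them."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

rng = np.random.default_rng(0)
print(excess_kurtosis(rng.standard_normal(100_000)))  # ~0   (Gaussian)
print(excess_kurtosis(rng.laplace(size=100_000)))     # ~3   (super-Gaussian, peaky)
print(excess_kurtosis(rng.uniform(size=100_000)))     # ~-1.2 (sub-Gaussian, flat)
```

Negentropy plays the same role but is more robust to outliers; practical ICA implementations often approximate it rather than use raw kurtosis.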
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aeroacoustics" title="aeroacoustics">aeroacoustics</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic%20source%20detection" title=" acoustic source detection"> acoustic source detection</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20difference%20of%20arrival" title=" time difference of arrival"> time difference of arrival</a>, <a href="https://publications.waset.org/abstracts/search?q=direction%20of%20arrival" title=" direction of arrival"> direction of arrival</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20source%20separation" title=" blind source separation"> blind source separation</a>, <a href="https://publications.waset.org/abstracts/search?q=independent%20component%20analysis" title=" independent component analysis"> independent component analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=drone" title=" drone"> drone</a> </p> <a href="https://publications.waset.org/abstracts/94236/study-on-acoustic-source-detection-performance-improvement-of-microphone-array-installed-on-drones-using-blind-source-separation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94236.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">119</span> Text Emotion Recognition by Multi-Head Attention based Bidirectional LSTM Utilizing Multi-Level Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vishwanath%20Pethri%20Kamath">Vishwanath Pethri Kamath</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Jayantha%20Gowda%20Sarapanahalli"> Jayantha Gowda Sarapanahalli</a>, <a href="https://publications.waset.org/abstracts/search?q=Vishal%20Mishra"> Vishal Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Siddhesh%20Balwant%20Bandgar"> Siddhesh Balwant Bandgar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recognition of emotional information is essential in any form of communication. The growth of human-computer interaction (HCI) in recent times underlines the importance of understanding expressed emotions, which is crucial for improving the system or the interaction itself. In this research work, textual data are used for emotion recognition. Text, being the least expressive of the multimodal resources, poses challenges such as limited contextual information and the sequential nature of language construction. We propose a neural architecture that resolves eight emotions from textual data drawn from multiple datasets, using Google's pre-trained word2vec word embeddings and a multi-head attention-based bidirectional LSTM model with one-vs-all multi-level classification. The emotions targeted in this research are anger, disgust, fear, guilt, joy, sadness, shame, and surprise. Textual data from the ISEAR, GoEmotions, and Affect datasets were combined to create the emotion dataset; overlapping or conflicting samples were handled with careful preprocessing. Our results show a significant improvement with the proposed architecture, including up to a 10-point improvement in recognizing some emotions. 
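The one-vs-all decision over the eight emotion heads can be sketched as follows (the logit values are made up for illustration; the real heads sit on top of the attention-pooled Bi-LSTM output):

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "guilt", "joy", "sadness", "shame", "surprise"]

def one_vs_all_predict(logits: np.ndarray) -> str:
    """Each emotion has its own binary (one-vs-all) head; the final label is
    the emotion whose head is most confident."""
    probs = 1.0 / (1.0 + np.exp(-logits))   # independent sigmoid per head
    return EMOTIONS[int(np.argmax(probs))]

logits = np.array([-1.2, -3.0, -0.5, -2.1, 2.4, -0.8, -2.7, 0.3])
print(one_vs_all_predict(logits))   # joy
```

Because each head is trained as its own binary problem, classes with few examples do not have to compete in a single softmax, which is one motivation for the multi-level scheme.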
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=text%20emotion%20recognition" title="text emotion recognition">text emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=bidirectional%20LSTM" title=" bidirectional LSTM"> bidirectional LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-head%20attention" title=" multi-head attention"> multi-head attention</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-level%20classification" title=" multi-level classification"> multi-level classification</a>, <a href="https://publications.waset.org/abstracts/search?q=google%20word2vec%20word%20embeddings" title=" google word2vec word embeddings"> google word2vec word embeddings</a> </p> <a href="https://publications.waset.org/abstracts/148957/text-emotion-recognition-by-multi-head-attention-based-bidirectional-lstm-utilizing-multi-level-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148957.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">174</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">118</span> Document-level Sentiment Analysis: An Exploratory Case Study of Low-resource Language Urdu</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ammarah%20Irum">Ammarah Irum</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Ali%20Tahir"> Muhammad Ali Tahir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Document-level sentiment analysis in Urdu is a challenging Natural Language Processing (NLP) task due to the difficulty of working with lengthy texts in a 
language with constrained resources. Deep learning models, complex neural network architectures, are well suited to text-based applications in addition to data formats such as audio, image, and video. To investigate the potential of deep learning for Urdu sentiment analysis, we implemented five deep learning models: Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), CNN with BiLSTM (CNN-BiLSTM), Bidirectional Encoder Representations from Transformers (BERT), and a hybrid model we developed, the BiLSTM-Single Layer Multi Filter Convolutional Neural Network (BiLSTM-SLMFCNN), which fuses the BiLSTM and CNN architectures. The proposed and baseline techniques were applied to the Urdu Customer Support dataset and the IMDB Urdu movie review dataset, using pre-trained Urdu word embeddings suitable for document-level sentiment analysis. The results were evaluated, and our proposed model outperformed all other deep learning techniques for Urdu sentiment analysis: BiLSTM-SLMFCNN achieved 83%, 79%, and 83% accuracy on the small, medium, and large IMDB Urdu movie review datasets, respectively, and 94% on the Urdu Customer Support dataset. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=urdu%20sentiment%20analysis" title="urdu sentiment analysis">urdu sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=opinion%20mining" title=" opinion mining"> opinion mining</a>, <a href="https://publications.waset.org/abstracts/search?q=low-resource%20language" title=" low-resource language"> low-resource language</a> </p> <a href="https://publications.waset.org/abstracts/172973/document-level-sentiment-analysis-an-exploratory-case-study-of-low-resource-language-urdu" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172973.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">117</span> Development of a Sequential Multimodal Biometric System for Web-Based Physical Access Control into a Security Safe</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Babatunde%20Olumide%20Olawale">Babatunde Olumide Olawale</a>, <a href="https://publications.waset.org/abstracts/search?q=Oyebode%20Olumide%20Oyediran"> Oyebode Olumide Oyediran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The security safe is a place or building where classified document and precious items are kept. 
To prevent unauthorised persons from gaining access to this safe, many technologies have been used. However, frequent reports of unauthorised persons gaining access to security safes and removing documents and items indicate that security gaps remain in the technologies currently used for access control. In this paper, we address this problem by developing a multimodal biometric system for physical access control into a security safe using face and voice recognition. The safe is accessed through face and speech pattern recognition, performed in that sequential order. User authentication is achieved with a camera/sensor unit and a microphone unit, both attached to the door of the safe. The user's face is captured by the camera/sensor unit, while the speech is captured by the microphone unit. The Scale Invariant Feature Transform (SIFT) algorithm was used to train images into templates for the face recognition system, while the Mel-Frequency Cepstral Coefficients (MFCC) algorithm was used to train the speech recognition system to recognise authorised users' speech. Both algorithms were hosted on two separate web-based servers, and the developed system was simulated in a MATLAB environment for automatic analysis. The results show that the developed system granted access to authorised users while denying access to unauthorised persons. 
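The sequential face-then-voice decision logic can be sketched as follows (a schematic of the access rule only, not the SIFT/MFCC recognition pipeline itself):

```python
def grant_access(face_match: bool, voice_match: bool) -> bool:
    """Sequential fusion: the voice check only runs if the face check passed;
    both modalities must accept before the safe unlocks."""
    if not face_match:      # first stage: face recognition at the camera/sensor unit
        return False
    return voice_match      # second stage: speech recognition at the microphone unit

print(grant_access(True, True))    # True  -> unlock
print(grant_access(True, False))   # False
print(grant_access(False, True))   # False (voice stage never reached)
```

Sequential (rather than parallel) fusion means an impostor rejected at the face stage never exercises the speech system, which reduces both load and attack surface.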
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=access%20control" title="access control">access control</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20biometrics" title=" multimodal biometrics"> multimodal biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20safe" title=" security safe"> security safe</a> </p> <a href="https://publications.waset.org/abstracts/73150/development-of-a-sequential-multimodal-biometric-system-for-web-based-physical-access-control-into-a-security-safe" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73150.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">335</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">116</span> Crab Shell Waste Chitosan-Based Thin Film for Acoustic Sensor Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maydariana%20Ayuningtyas">Maydariana Ayuningtyas</a>, <a href="https://publications.waset.org/abstracts/search?q=Bambang%20Riyanto"> Bambang Riyanto</a>, <a href="https://publications.waset.org/abstracts/search?q=Akhiruddin%20Maddu"> Akhiruddin Maddu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Industrial waste of crustacean shells, such as shrimp and crab, has been considered as one of the major issues contributing to environmental pollution. The waste processing mechanisms to form new, practical substances with added value have been developed. 
Chitosan, derived from chitin obtained from crab and shrimp shells, performs remarkably well in a broad range of applications. A chitosan composite-based diaphragm is a new direction in fiber optic acoustic sensor development: the elastic modulus, dynamic response, and sensitivity to acoustic waves of a chitosan-based composite film give this organic sound-detecting material great potential. The objective of this research was to develop a chitosan diaphragm for a fiber optic microphone system. The films were formulated by blending 5% polyvinyl alcohol (PVA) solution with chitosan dissolved at 0%, 1%, and 2% in a 1:1 ratio, respectively. The composite diaphragms were characterised morphologically and mechanically to predict the expected acoustic sensor sensitivity. The composite with 2% chitosan showed the optimum performance, with 242.55 µm thickness, 67.9% relative humidity, and 29-76% light transmittance. The Young's modulus of the 2%-chitosan composite was 4.89×10⁴ N/m², and the diaphragm generated a voltage amplitude of 0.013 V with a sensitivity of 3.28 mV/Pa at 1 kHz. Based on these results, chitosan from crustacean shell waste can be considered a viable alternative material for the sensing pad of a fiber optic acoustic sensor. Further research on chitosan utilisation is proposed toward a novel optical microphone as part of anthropogenic noise control for environmental and biodiversity conservation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acoustic%20sensor" title="acoustic sensor">acoustic sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=chitosan" title=" chitosan"> chitosan</a>, <a href="https://publications.waset.org/abstracts/search?q=composite" title=" composite"> composite</a>, <a href="https://publications.waset.org/abstracts/search?q=crab%20shell" title=" crab shell"> crab shell</a>, <a href="https://publications.waset.org/abstracts/search?q=diaphragm" title=" diaphragm"> diaphragm</a>, <a href="https://publications.waset.org/abstracts/search?q=waste%20utilisation" title=" waste utilisation"> waste utilisation</a> </p> <a href="https://publications.waset.org/abstracts/71655/crab-shell-waste-chitosan-based-thin-film-for-acoustic-sensor-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71655.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">257</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=bidirectional%20microphone&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=bidirectional%20microphone&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=bidirectional%20microphone&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=bidirectional%20microphone&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=bidirectional%20microphone&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: 
"https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
