Search results for: low-density parity-check (LDPC) decoder
class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="low-density parity-check (LDPC) decoder"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 45</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: low-density parity-check (LDPC) decoder</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">45</span> Analysis of Joint Source Channel LDPC Coding for Correlated Sources Transmission over Noisy Channels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marwa%20Ben%20Abdessalem">Marwa Ben Abdessalem</a>, <a href="https://publications.waset.org/abstracts/search?q=Amin%20Zribi"> Amin Zribi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ammar%20Bouall%C3%A8gue"> Ammar Bouallègue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a Joint Source Channel coding scheme based on LDPC codes is investigated. We consider two concatenated LDPC codes, one allows to compress a correlated source and the second to protect it against channel degradations. The original information can be reconstructed at the receiver by a joint decoder, where the source decoder and the channel decoder run in parallel by transferring extrinsic information. We investigate the performance of the JSC LDPC code in terms of Bit-Error Rate (BER) in the case of transmission over an Additive White Gaussian Noise (AWGN) channel, and for different source and channel rate parameters. We emphasize how JSC LDPC presents a performance tradeoff depending on the channel state and on the source correlation. We show that, the JSC LDPC is an efficient solution for a relatively low Signal-to-Noise Ratio (SNR) channel, especially with highly correlated sources. Finally, a source-channel rate optimization has to be applied to guarantee the best JSC LDPC system performance for a given channel. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AWGN%20channel" title="AWGN channel">AWGN channel</a>, <a href="https://publications.waset.org/abstracts/search?q=belief%20propagation" title=" belief propagation"> belief propagation</a>, <a href="https://publications.waset.org/abstracts/search?q=joint%20source%20channel%20coding" title=" joint source channel coding"> joint source channel coding</a>, <a href="https://publications.waset.org/abstracts/search?q=LDPC%20codes" title=" LDPC codes"> LDPC codes</a> </p> <a href="https://publications.waset.org/abstracts/62721/analysis-of-joint-source-channel-ldpc-coding-for-correlated-sources-transmission-over-noisy-channels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62721.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">357</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">44</span> High Performance Field Programmable Gate Array-Based Stochastic Low-Density Parity-Check Decoder Design for IEEE 802.3an Standard </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ghania%20Zerari">Ghania Zerari</a>, <a href="https://publications.waset.org/abstracts/search?q=Abderrezak%20Guessoum"> Abderrezak Guessoum</a>, <a href="https://publications.waset.org/abstracts/search?q=Rachid%20Beguenane"> Rachid Beguenane</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces high-performance architecture for fully parallel stochastic Low-Density Parity-Check (LDPC) field programmable gate array (FPGA) based LDPC decoder. The new approach is designed to decrease the decoding latency and to reduce the FPGA logic utilisation. To accomplish the target logic utilisation reduction, the routing of the proposed sub-variable node (VN) internal memory is designed to utilize one slice distributed RAM. Furthermore, a VN initialization, using the channel input probability, is achieved to enhance the decoder convergence, without extra resources and without integrating the output saturated-counters. The Xilinx FPGA implementation, of IEEE 802.3an standard LDPC code, shows that the proposed decoding approach attain high performance along with reduction of FPGA logic utilisation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=low-density%20parity-check%20%28LDPC%29%20decoder" title="low-density parity-check (LDPC) decoder">low-density parity-check (LDPC) decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=stochastic%20decoding" title=" stochastic decoding"> stochastic decoding</a>, <a href="https://publications.waset.org/abstracts/search?q=field%20programmable%20gate%20array%20%28FPGA%29" title=" field programmable gate array (FPGA)"> field programmable gate array (FPGA)</a>, <a href="https://publications.waset.org/abstracts/search?q=IEEE%20802.3an%20standard" title=" IEEE 802.3an standard"> IEEE 802.3an standard</a> </p> <a href="https://publications.waset.org/abstracts/81538/high-performance-field-programmable-gate-array-based-stochastic-low-density-parity-check-decoder-design-for-ieee-8023an-standard" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81538.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">43</span> Performance Comparison of Non-Binary RA and QC-LDPC Codes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ni%20Wenli">Ni Wenli</a>, <a href="https://publications.waset.org/abstracts/search?q=He%20Jing"> He Jing</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Repeat–Accumulate (RA) codes are subclass of LDPC codes with fast encoder structures. In this paper, we consider a nonbinary extension of binary LDPC codes over GF(q) and construct a non-binary RA code and a non-binary QC-LDPC code over GF(2^4), we construct non-binary RA codes with linear encoding method and non-binary QC-LDPC codes with algebraic constructions method. And the BER performance of RA and QC-LDPC codes over GF(q) are compared with BP decoding and by simulation over the Additive White Gaussian Noise (AWGN) channels. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=non-binary%20RA%20codes" title="non-binary RA codes">non-binary RA codes</a>, <a href="https://publications.waset.org/abstracts/search?q=QC-LDPC%20codes" title=" QC-LDPC codes"> QC-LDPC codes</a>, <a href="https://publications.waset.org/abstracts/search?q=performance%20comparison" title=" performance comparison"> performance comparison</a>, <a href="https://publications.waset.org/abstracts/search?q=BP%20algorithm" title=" BP algorithm"> BP algorithm</a> </p> <a href="https://publications.waset.org/abstracts/42170/performance-comparison-of-non-binary-ra-and-qc-ldpc-codes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42170.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">376</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42</span> Low Density Parity Check Codes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kassoul%20Ilyes">Kassoul Ilyes</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The field of error correcting codes has been revolutionized by the introduction of iteratively decoded codes. Among these, LDPC codes are now a preferred solution thanks to their remarkable performance and low complexity. The binary version of LDPC codes showed even better performance, although it’s decoding introduced greater complexity. This thesis studies the performance of binary LDPC codes using simplified weighted decisions. Information is transported between a transmitter and a receiver by digital transmission systems, either by propagating over a radio channel or also by using a transmission medium such as the transmission line. The purpose of the transmission system is then to carry the information from the transmitter to the receiver as reliably as possible. These codes have not generated enough interest within the coding theory community. This forgetfulness will last until the introduction of Turbo-codes and the iterative principle. Then it was proposed to adopt Pearl's Belief Propagation (BP) algorithm for decoding these codes. Subsequently, Luby introduced irregular LDPC codes characterized by a parity check matrix. And finally, we study simplifications on binary LDPC codes. Thus, we propose a method to make the exact calculation of the APP simpler. This method leads to simplifying the implementation of the system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=LDPC" title="LDPC">LDPC</a>, <a href="https://publications.waset.org/abstracts/search?q=parity%20check%20matrix" title=" parity check matrix"> parity check matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=5G" title=" 5G"> 5G</a>, <a href="https://publications.waset.org/abstracts/search?q=BER" title=" BER"> BER</a>, <a href="https://publications.waset.org/abstracts/search?q=SNR" title=" SNR"> SNR</a> </p> <a href="https://publications.waset.org/abstracts/145269/low-density-parity-check-codes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145269.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">154</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">41</span> Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cinna%20Soltanpur">Cinna Soltanpur</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Ghamari"> Mohammad Ghamari</a>, <a href="https://publications.waset.org/abstracts/search?q=Behzad%20Momahed%20Heravi"> Behzad Momahed Heravi</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatemeh%20Zare"> Fatemeh Zare</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Low-density parity-check (LDPC) codes have been shown to deliver capacity approaching performance; however, problematic graphical structures (e.g. trapping sets) in the Tanner graph of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and to lower the error floors of LDPC code. The outer code in the proposed concatenation is the LDPC, and the inner code is a high rate array code. This approach applies an interactive hybrid process between the BCJR decoding for the array code and the SPA for the LDPC code together with bit-pinning and bit-flipping techniques. Margulis code of size (2640, 1320) has been used for the simulation and it has been shown that the proposed concatenation and decoding scheme can considerably improve the error floor performance with minimal rate loss. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=concatenated%20coding" title="concatenated coding">concatenated coding</a>, <a href="https://publications.waset.org/abstracts/search?q=low%E2%80%93density%20parity%E2%80%93check%20codes" title=" low–density parity–check codes"> low–density parity–check codes</a>, <a href="https://publications.waset.org/abstracts/search?q=array%20code" title=" array code"> array code</a>, <a href="https://publications.waset.org/abstracts/search?q=error%20floors" title=" error floors"> error floors</a> </p> <a href="https://publications.waset.org/abstracts/60058/lowering-error-floors-by-concatenation-of-low-density-parity-check-and-array-code" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60058.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">356</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">40</span> Performance of VSAT MC-CDMA System Using LDPC and Turbo Codes over Multipath Channel</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hassan%20El%20Ghazi">Hassan El Ghazi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20El%20Jourmi"> Mohammed El Jourmi</a>, <a href="https://publications.waset.org/abstracts/search?q=Tayeb%20Sadiki"> Tayeb Sadiki</a>, <a href="https://publications.waset.org/abstracts/search?q=Esmail%20Ahouzi"> Esmail Ahouzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this paper is to model and analyze a geostationary satellite communication system based on VSAT network and Multicarrier CDMA system scheme which presents a combination of multicarrier modulation scheme and CDMA concepts. In this study the channel coding strategies (Turbo codes and LDPC codes) are adopted to achieve good performance due to iterative decoding. The envisaged system is examined for a transmission over Multipath channel with use of Ku band in the uplink case. The simulation results are obtained for each different case. The performance of the system is given in terms of Bit Error Rate (BER) and energy per bit to noise power spectral density ratio (Eb/N0). The performance results of designed system shown that the communication system coded with LDPC codes can achieve better error rate performance compared to VSAT MC-CDMA system coded with Turbo codes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=satellite%20communication" title="satellite communication">satellite communication</a>, <a href="https://publications.waset.org/abstracts/search?q=VSAT%20Network" title=" VSAT Network"> VSAT Network</a>, <a href="https://publications.waset.org/abstracts/search?q=MC-CDMA" title=" MC-CDMA"> MC-CDMA</a>, <a href="https://publications.waset.org/abstracts/search?q=LDPC%20codes" title=" LDPC codes"> LDPC codes</a>, <a href="https://publications.waset.org/abstracts/search?q=turbo%20codes" title=" turbo codes"> turbo codes</a>, <a href="https://publications.waset.org/abstracts/search?q=uplink" title=" uplink"> uplink</a> </p> <a href="https://publications.waset.org/abstracts/20041/performance-of-vsat-mc-cdma-system-using-ldpc-and-turbo-codes-over-multipath-channel" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20041.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">504</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">39</span> Performance Analysis of IDMA Scheme Using Quasi-Cyclic Low Density Parity Check Codes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anurag%20Saxena">Anurag Saxena</a>, <a href="https://publications.waset.org/abstracts/search?q=Alkesh%20Agrawal"> Alkesh Agrawal</a>, <a href="https://publications.waset.org/abstracts/search?q=Dinesh%20Kumar"> Dinesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The next generation mobile communication systems i.e. fourth generation (4G) was developed to accommodate the quality of service and required data rate. This project focuses on multiple access technique proposed in 4G communication systems. It is attempted to demonstrate the IDMA (Interleave Division Multiple Access) technology. The basic principle of IDMA is that interleaver is different for each user whereas CDMA employs different signatures. IDMA inherits many advantages of CDMA such as robust against fading, easy cell planning; dynamic channel sharing and IDMA increase the spectral efficiency and reduce the receiver complexity. In this, performance of IDMA is analyzed using QC-LDPC coding scheme further it is compared with LDPC coding and at last BER is calculated and plotted in MATLAB. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=4G" title="4G">4G</a>, <a href="https://publications.waset.org/abstracts/search?q=QC-LDPC" title=" QC-LDPC"> QC-LDPC</a>, <a href="https://publications.waset.org/abstracts/search?q=CDMA" title=" CDMA"> CDMA</a>, <a href="https://publications.waset.org/abstracts/search?q=IDMA" title=" IDMA"> IDMA</a> </p> <a href="https://publications.waset.org/abstracts/46251/performance-analysis-of-idma-scheme-using-quasi-cyclic-low-density-parity-check-codes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46251.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">323</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">38</span> Image Captioning with Vision-Language Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Promise%20Ekpo%20Osaine">Promise Ekpo Osaine</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Melesse"> Daniel Melesse</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image captioning is an active area of research in the multi-modal artificial intelligence (AI) community as it connects vision and language understanding, especially in settings where it is required that a model understands the content shown in an image and generates semantically and grammatically correct descriptions. In this project, we followed a standard approach to a deep learning-based image captioning model, injecting architecture for the encoder-decoder setup, where the encoder extracts image features, and the decoder generates a sequence of words that represents the image content. As such, we investigated image encoders, which are ResNet101, InceptionResNetV2, EfficientNetB7, EfficientNetV2M, and CLIP. As a caption generation structure, we explored long short-term memory (LSTM). The CLIP-LSTM model demonstrated superior performance compared to the encoder-decoder models, achieving a BLEU-1 score of 0.904 and a BLEU-4 score of 0.640. Additionally, among the CNN-LSTM models, EfficientNetV2M-LSTM exhibited the highest performance with a BLEU-1 score of 0.896 and a BLEU-4 score of 0.586 while using a single-layer LSTM. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-modal%20AI%20systems" title="multi-modal AI systems">multi-modal AI systems</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20captioning" title=" image captioning"> image captioning</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder" title=" encoder"> encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=decoder" title=" decoder"> decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=BLUE%20score" title=" BLUE score"> BLUE score</a> </p> <a href="https://publications.waset.org/abstracts/181849/image-captioning-with-vision-language-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181849.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">77</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">37</span> A Guide to the Implementation of Ambisonics Super Stereo</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alessio%20Mastrorillo">Alessio Mastrorillo</a>, <a href="https://publications.waset.org/abstracts/search?q=Giuseppe%20Silvi"> Giuseppe Silvi</a>, <a href="https://publications.waset.org/abstracts/search?q=Francesco%20Scagliola"> Francesco Scagliola</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we introduce an Ambisonics decoder with an implementation of the C-format, also called Super Stereo. This format is an alternative to conventional stereo and binaural decoding. Unlike those, this format conveys audio information from the horizontal plane and works with stereo speakers and headphones. The two C-format channels can also return a reconstructed planar B-format. This work provides an open-source implementation for this format. We implement an all-pass filter for signal quadrature, as required by the decoding equations. This filter works with six Biquads in a cascade configuration, with values for control frequency and quality factor discovered experimentally. The phase response of the filter delivers a small error in the 20-14.000Hz range. The decoder has been tested with audio sources up to 192kHz sample rate, returning pristine sound quality and detailed stereo image. It has been included in the Envelop for Live suite and is available as an open-source repository. This decoder has applications in Virtual Reality and 360° audio productions, music composition, and online streaming. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ambisonics" title="ambisonics">ambisonics</a>, <a href="https://publications.waset.org/abstracts/search?q=UHJ" title=" UHJ"> UHJ</a>, <a href="https://publications.waset.org/abstracts/search?q=quadrature%20filter" title=" quadrature filter"> quadrature filter</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title=" virtual reality"> virtual reality</a>, <a href="https://publications.waset.org/abstracts/search?q=Gerzon" title=" Gerzon"> Gerzon</a>, <a href="https://publications.waset.org/abstracts/search?q=decoder" title=" decoder"> decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo" title=" stereo"> stereo</a>, <a href="https://publications.waset.org/abstracts/search?q=binaural" title=" binaural"> binaural</a>, <a href="https://publications.waset.org/abstracts/search?q=biquad" title=" biquad"> biquad</a> </p> <a href="https://publications.waset.org/abstracts/158051/a-guide-to-the-implementation-of-ambisonics-super-stereo" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158051.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">91</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">36</span> Extending Image Captioning to Video Captioning Using Encoder-Decoder</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sikiru%20Ademola%20Adewale">Sikiru Ademola Adewale</a>, <a href="https://publications.waset.org/abstracts/search?q=Joe%20Thomas"> Joe Thomas</a>, <a href="https://publications.waset.org/abstracts/search?q=Bolanle%20Hafiz%20Matti"> Bolanle Hafiz Matti</a>, <a href="https://publications.waset.org/abstracts/search?q=Tosin%20Ige"> Tosin Ige</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This project demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over the video temporal dimension. Predicted captions were shown to generalize over video action, even in instances where the video scene changed dramatically. Model architecture changes are discussed to improve sentence grammar and correctness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=decoder" title="decoder">decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder" title=" encoder"> encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=many-to-many%20mapping" title=" many-to-many mapping"> many-to-many mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20captioning" title=" video captioning"> video captioning</a>, <a href="https://publications.waset.org/abstracts/search?q=2-gram%20BLEU" title=" 2-gram BLEU"> 2-gram BLEU</a> </p> <a href="https://publications.waset.org/abstracts/164540/extending-image-captioning-to-video-captioning-using-encoder-decoder" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164540.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">35</span> End-to-End Spanish-English Sequence Learning Translation Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vidhu%20Mitha%20Goutham">Vidhu Mitha Goutham</a>, <a href="https://publications.waset.org/abstracts/search?q=Ruma%20Mukherjee"> Ruma Mukherjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The low availability of well-trained, unlimited, dynamic-access models for specific languages makes it hard for corporate users to adopt quick translation techniques and incorporate them into product solutions. As translation tasks increasingly require a dynamic sequence learning curve; stable, cost-free opensource models are scarce. We survey and compare current translation techniques and propose a modified sequence to sequence model repurposed with attention techniques. Sequence learning using an encoder-decoder model is now paving the path for higher precision levels in translation. Using a Convolutional Neural Network (CNN) encoder and a Recurrent Neural Network (RNN) decoder background, we use Fairseq tools to produce an end-to-end bilingually trained Spanish-English machine translation model including source language detection. We acquire competitive results using a duo-lingo-corpus trained model to provide for prospective, ready-made plug-in use for compound sentences and document translations. Our model serves a decent system for large, organizational data translation needs. While acknowledging its shortcomings and future scope, it also identifies itself as a well-optimized deep neural network model and solution. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attention" title="attention">attention</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder-decoder" title=" encoder-decoder"> encoder-decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=Fairseq" title=" Fairseq"> Fairseq</a>, <a href="https://publications.waset.org/abstracts/search?q=Seq2Seq" title=" Seq2Seq"> Seq2Seq</a>, <a href="https://publications.waset.org/abstracts/search?q=Spanish" title=" Spanish"> Spanish</a>, <a href="https://publications.waset.org/abstracts/search?q=translation" title=" translation"> translation</a> </p> <a href="https://publications.waset.org/abstracts/132739/end-to-end-spanish-english-sequence-learning-translation-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132739.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">34</span> Deep-Learning to Generation of Weights for Image Captioning Using Part-of-Speech Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tiago%20do%20Carmo%20Nogueira">Tiago do Carmo Nogueira</a>, <a href="https://publications.waset.org/abstracts/search?q=C%C3%A1ssio%20Dener%20Noronha%20Vinhal"> Cássio Dener Noronha Vinhal</a>, <a href="https://publications.waset.org/abstracts/search?q=G%C3%A9lson%20da%20Cruz%20J%C3%BAnior"> Gélson da Cruz Júnior</a>, <a href="https://publications.waset.org/abstracts/search?q=Matheus%20Rudolfo%20Diedrich%20Ullmann"> Matheus Rudolfo Diedrich Ullmann</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Generating automatic image descriptions through natural language is a challenging task. Image captioning is a task that consistently describes an image by combining computer vision and natural language processing techniques. To accomplish this task, cutting-edge models use encoder-decoder structures. Thus, Convolutional Neural Networks (CNN) are used to extract the characteristics of the images, and Recurrent Neural Networks (RNN) generate the descriptive sentences of the images. However, cutting-edge approaches still suffer from problems of generating incorrect captions and accumulating errors in the decoders. To solve this problem, we propose a model based on the encoder-decoder structure, introducing a module that generates the weights according to the importance of the word to form the sentence, using the part-of-speech (PoS). Thus, the results demonstrate that our model surpasses state-of-the-art models. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gated%20recurrent%20units" title="gated recurrent units">gated recurrent units</a>, <a href="https://publications.waset.org/abstracts/search?q=caption%20generation" title=" caption generation"> caption generation</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=part-of-speech" title=" part-of-speech"> part-of-speech</a> </p> <a href="https://publications.waset.org/abstracts/159076/deep-learning-to-generation-of-weights-for-image-captioning-using-part-of-speech-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159076.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">33</span> Towards Long-Range Pixels Connection for Context-Aware Semantic Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Zubair%20Khan">Muhammad Zubair Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Yugyung%20Lee"> Yugyung Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep learning has recently achieved enormous response in semantic image segmentation. The previously developed U-Net inspired architectures operate with continuous stride and pooling operations, leading to spatial data loss. Also, the methods lack establishing long-term pixels connection to preserve context knowledge and reduce spatial loss in prediction. This article developed encoder-decoder architecture with bi-directional LSTM embedded in long skip-connections and densely connected convolution blocks. The network non-linearly combines the feature maps across encoder-decoder paths for finding dependency and correlation between image pixels. Additionally, the densely connected convolutional blocks are kept in the final encoding layer to reuse features and prevent redundant data sharing. The method applied batch-normalization for reducing internal covariate shift in data distributions. The empirical evidence shows a promising response to our method compared with other semantic segmentation techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=pixels%20connection" title=" pixels connection"> pixels connection</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a> </p> <a href="https://publications.waset.org/abstracts/147965/towards-long-range-pixels-connection-for-context-aware-semantic-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147965.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">32</span> Low Light Image Enhancement with Multi-Stage Interconnected Autoencoders Integration in Pix to Pix GAN</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Atif">Muhammad Atif</a>, <a href="https://publications.waset.org/abstracts/search?q=Cang%20Yan"> Cang Yan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The enhancement of low-light images is a significant area of study aimed at enhancing the quality of captured images in challenging lighting environments. Recently, methods based on convolutional neural networks (CNN) have gained prominence as they offer state-of-the-art performance. However, many approaches based on CNN rely on increasing the size and complexity of the neural network. In this study, we propose an alternative method for improving low-light images using an autoencoder-based multiscale knowledge transfer model. Our method leverages the power of three autoencoders, where the encoders of the first two autoencoders are directly connected to the decoder of the third autoencoder. Additionally, the decoder of the first two autoencoders is connected to the encoder of the third autoencoder. This architecture enables effective knowledge transfer, allowing the third autoencoder to learn and benefit from the enhanced knowledge extracted by the first two autoencoders. We further integrate the proposed model into the PIX to PIX GAN framework. By integrating our proposed model as the generator in the GAN framework, we aim to produce enhanced images that not only exhibit improved visual quality but also possess a more authentic and realistic appearance. These experimental results, both qualitative and quantitative, show that our method is better than the state-of-the-art methodologies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=low%20light%20image%20enhancement" title="low light image enhancement">low light image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/180048/low-light-image-enhancement-with-multi-stage-interconnected-autoencoders-integration-in-pix-to-pix-gan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/180048.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">80</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">31</span> Design of SAE J2716 Single Edge Nibble Transmission Digital Sensor Interface for Automotive Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jongbae%20Lee">Jongbae Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Seongsoo%20Lee"> Seongsoo Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Modern sensors often embed small-size digital controller for sensor control, value calibration, and signal processing. These sensors require digital data communication with host microprocessors, but conventional digital communication protocols are too heavy for price reduction. SAE J2716 SENT (single edge nibble transmission) protocol transmits direct digital waveforms instead of complicated analog modulated signals. In this paper, a SENT interface is designed in Verilog HDL (hardware description language) and implemented in FPGA (field-programmable gate array) evaluation board. The designed SENT interface consists of frame encoder/decoder, configuration register, tick period generator, CRC (cyclic redundancy code) generator/checker, and TX/RX (transmission/reception) buffer. Frame encoder/decoder is implemented as a finite state machine, and it controls whole SENT interface. Configuration register contains various parameters such as operation mode, tick length, CRC option, pause pulse option, and number of nibble data. Tick period generator generates tick signals from input clock. CRC generator/checker generates or checks CRC in the SENT data frame. TX/RX buffer stores transmission/received data. The designed SENT interface can send or receives digital data in 25~65 kbps at 3 us tick. Synthesized in 0.18 um fabrication technologies, it is implemented about 2,500 gates. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20sensor%20interface" title="digital sensor interface">digital sensor interface</a>, <a href="https://publications.waset.org/abstracts/search?q=SAE%20J2716" title=" SAE J2716"> SAE J2716</a>, <a href="https://publications.waset.org/abstracts/search?q=SENT" title=" SENT"> SENT</a>, <a href="https://publications.waset.org/abstracts/search?q=verilog%20HDL" title=" verilog HDL"> verilog HDL</a> </p> <a href="https://publications.waset.org/abstracts/94620/design-of-sae-j2716-single-edge-nibble-transmission-digital-sensor-interface-for-automotive-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94620.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">300</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">30</span> Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shweta%20Singh">Shweta Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Sudaman%20Katti"> Sudaman Katti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders which make use of self-attention and operate based on a memory, are used. In this research work, results for various 3D visual and non-visual reinforcement learning tasks designed in Unity software were obtained. Convolutional neural networks, more specifically, nature CNN architecture, are used for input processing in visual tasks, and comparison with standard long short-term memory (LSTM) architecture is performed for both visual tasks based on CNNs and non-visual tasks based on coordinate inputs. This research work combines the transformer architecture with the proximal policy optimization technique used popularly in reinforcement learning for stability and better policy updates while training, especially for continuous action spaces, which are used in this research work. Certain tasks in this paper are long horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities like storage of experiences and choosing appropriate actions based on recall. The transformer, which makes use of memory and self-attention mechanism in an encoder-decoder configuration proved to have better performance when compared to LSTM in terms of exploration and rewards achieved. Such memory based architectures can be used extensively in the field of cognitive robotics and reinforcement learning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=reinforcement%20learning" title=" reinforcement learning"> reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=self-attention" title=" self-attention"> self-attention</a>, <a href="https://publications.waset.org/abstracts/search?q=transformers" title=" transformers"> transformers</a>, <a href="https://publications.waset.org/abstracts/search?q=unity" title=" unity"> unity</a> </p> <a href="https://publications.waset.org/abstracts/163301/memory-based-reinforcement-learning-with-transformers-for-long-horizon-timescales-and-continuous-action-spaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163301.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">29</span> Influence of Error Correction Codes on the Quality of Optical Broadband Connections</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mouna%20Hemdi">Mouna Hemdi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jamel%20bel%20Hadj%20Tahar"> Jamel bel Hadj Tahar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The increasing development of multimedia applications requiring the simultaneous transport of several different services contributes to the evolution of the need for very high-speed network. In this paper, we propose an effective solution to achieve the very high speed while retaining elements of the optical transmission channel. So our study focuses on error correcting codes that aim for quality improvement on duty. We present a comparison of the quality of service for single channels and integrating the code BCH, RS and LDPC in order to find the best code in the different conditions of the transmission. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=code%20error%20correction" title="code error correction">code error correction</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20speed%20broadband" title=" high speed broadband"> high speed broadband</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20transmission" title=" optical transmission"> optical transmission</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20systems%20security" title=" information systems security"> information systems security</a> </p> <a href="https://publications.waset.org/abstracts/28490/influence-of-error-correction-codes-on-the-quality-of-optical-broadband-connections" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28490.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">393</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28</span> Capacity Estimation of Hybrid Automated Repeat Request Protocol for Low Earth Orbit Mega-Constellations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arif%20Armagan%20Gozutok">Arif Armagan Gozutok</a>, <a href="https://publications.waset.org/abstracts/search?q=Alper%20Kule"> Alper Kule</a>, <a href="https://publications.waset.org/abstracts/search?q=Burak%20Tos"> Burak Tos</a>, <a href="https://publications.waset.org/abstracts/search?q=Selman%20Demirel"> Selman Demirel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Wireless communication chain requires effective ways to keep throughput efficiency high while it suffers location-dependent, time-varying burst errors. Several techniques are developed in order to assure that the receiver recovers the transmitted information without errors. The most fundamental approaches are error checking and correction besides re-transmission of the non-acknowledged packets. In this paper, stop & wait (SAW) and chase combined (CC) hybrid automated repeat request (HARQ) protocols are compared and analyzed in terms of throughput and average delay for the usage of low earth orbit (LEO) mega-constellations case. Several assumptions and technological implementations are considered as well as usage of low-density parity check (LDPC) codes together with several constellation orbit configurations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HARQ" title="HARQ">HARQ</a>, <a href="https://publications.waset.org/abstracts/search?q=LEO" title=" LEO"> LEO</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20constellation" title=" satellite constellation"> satellite constellation</a>, <a href="https://publications.waset.org/abstracts/search?q=throughput" title=" throughput"> throughput</a> </p> <a href="https://publications.waset.org/abstracts/134154/capacity-estimation-of-hybrid-automated-repeat-request-protocol-for-low-earth-orbit-mega-constellations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/134154.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">145</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">27</span> A CMOS Capacitor Array for ESPAR with Fast Switching Time</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jin-Sup%20Kim">Jin-Sup Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Se-Hwan%20Choi"> Se-Hwan Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jae-Young%20Lee"> Jae-Young Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A 8-bit CMOS capacitor array is designed for using in electrically steerable passive array radiator (ESPAR). The proposed capacitor array shows the fast response time in rising and falling characteristics. Compared to other works in silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technologies, it shows a comparable tuning range and switching time with low power consumption. Using the 0.18um CMOS, the capacitor array features a tuning range of 1.5 to 12.9 pF at 2.4GHz. Including the 2X4 decoder for control interface, the Chip size is 350um X 145um. Current consumption is about 80 nA at 1.8 V operation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CMOS%20capacitor%20array" title="CMOS capacitor array">CMOS capacitor array</a>, <a href="https://publications.waset.org/abstracts/search?q=ESPAR" title=" ESPAR"> ESPAR</a>, <a href="https://publications.waset.org/abstracts/search?q=SOI" title=" SOI"> SOI</a>, <a href="https://publications.waset.org/abstracts/search?q=SOS" title=" SOS"> SOS</a>, <a href="https://publications.waset.org/abstracts/search?q=switching%20time" title=" switching time"> switching time</a> </p> <a href="https://publications.waset.org/abstracts/24058/a-cmos-capacitor-array-for-espar-with-fast-switching-time" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24058.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">590</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">26</span> Implementation of Successive Interference Cancellation Algorithms in the 5g Downlink</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mokrani%20Mohamed%20Amine">Mokrani Mohamed Amine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we have implemented successive interference cancellation algorithms in the 5G downlink. We have calculated the maximum throughput in Frequency Division Duplex (FDD) mode in the downlink, where we have obtained a value equal to 836932 b/ms. The transmitter is of type Multiple Input Multiple Output (MIMO) with eight transmitting and receiving antennas. Each antenna among eight transmits simultaneously a data rate of 104616 b/ms that contains the binary messages of the three users; in this case, the Cyclic Redundancy Check CRC is negligible, and the MIMO category is the spatial diversity. The technology used for this is called Non-Orthogonal Multiple Access (NOMA) with a Quadrature Phase Shift Keying (QPSK) modulation. The transmission is done in a Rayleigh fading channel with the presence of obstacles. The MIMO Successive Interference Cancellation (SIC) receiver with two transmitting and receiving antennas recovers its binary message without errors for certain values of transmission power such as 50 dBm, with 0.054485% errors when the transmitted power is 20dBm and with 0.00286763% errors for a transmitted power of 32 dBm(in the case of user 1) as well as with 0.0114705% errors when the transmitted power is 20 dBm also with 0.00286763% errors for a power of 24 dBm(in the case of user2) by applying the steps involved in SIC. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=5G" title="5G">5G</a>, <a href="https://publications.waset.org/abstracts/search?q=NOMA" title=" NOMA"> NOMA</a>, <a href="https://publications.waset.org/abstracts/search?q=QPSK" title=" QPSK"> QPSK</a>, <a href="https://publications.waset.org/abstracts/search?q=TBS" title=" TBS"> TBS</a>, <a href="https://publications.waset.org/abstracts/search?q=LDPC" title=" LDPC"> LDPC</a>, <a href="https://publications.waset.org/abstracts/search?q=SIC" title=" SIC"> SIC</a>, <a href="https://publications.waset.org/abstracts/search?q=capacity" title=" capacity"> capacity</a> </p> <a href="https://publications.waset.org/abstracts/163944/implementation-of-successive-interference-cancellation-algorithms-in-the-5g-downlink" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163944.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> Defect Detection for Nanofibrous Images with Deep Learning-Based Approaches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaokai%20Liu">Gaokai Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic defect detection for nanomaterial images is widely required in industrial scenarios. Deep learning approaches are considered as the most effective solutions for the great majority of image-based tasks. In this paper, an edge guidance network for defect segmentation is proposed. First, the encoder path with multiple convolution and downsampling operations is applied to the acquisition of shared features. Then two decoder paths both are connected to the last convolution layer of the encoder and supervised by the edge and segmentation labels, respectively, to guide the whole training process. Meanwhile, the edge and encoder outputs from the same stage are concatenated to the segmentation corresponding part to further tune the segmentation result. Finally, the effectiveness of the proposed method is verified via the experiments on open nanofibrous datasets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=defect%20detection" title=" defect detection"> defect detection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=nanomaterials" title=" nanomaterials"> nanomaterials</a> </p> <a href="https://publications.waset.org/abstracts/133093/defect-detection-for-nanofibrous-images-with-deep-learning-based-approaches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133093.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">24</span> Optical Multicast over OBS Networks: An Approach Based on Code-Words and Tunable Decoders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maha%20Sliti">Maha Sliti</a>, <a href="https://publications.waset.org/abstracts/search?q=Walid%20Abdallah"> Walid Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Noureddine%20Boudriga"> Noureddine Boudriga</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the frame of this work, we present an optical multicasting approach based on optical code-words. Our approach associates, in the edge node, an optical code-word to a group multicast address. In the core node, a set of tunable decoders are used to send a traffic data to multiple destinations based on the received code-word. The use of code-words, which correspond to the combination of an input port and a set of output ports, allows the implementation of an optical switching matrix. At the reception of a burst, it will be delayed in an optical memory. And, the received optical code-word is split to a set of tunable optical decoders. When it matches a configured code-word, the delayed burst is switched to a set of output ports. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optical%20multicast" title="optical multicast">optical multicast</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20burst%20switching%20networks" title=" optical burst switching networks"> optical burst switching networks</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20code-words" title=" optical code-words"> optical code-words</a>, <a href="https://publications.waset.org/abstracts/search?q=tunable%20decoder" title=" tunable decoder"> tunable decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20optical%20memory" title=" virtual optical memory"> virtual optical memory</a> </p> <a href="https://publications.waset.org/abstracts/11614/optical-multicast-over-obs-networks-an-approach-based-on-code-words-and-tunable-decoders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11614.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">607</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> Reducing Power Consumption in Network on Chip Using Scramble Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vinayaga%20Jagadessh%20Raja">Vinayaga Jagadessh Raja</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Ganesan"> R. Ganesan</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ramesh%20Kumar"> S. Ramesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An ever more significant fraction of the overall power dissipation of a network-on-chip (NoC) based system on- chip (SoC) is due to the interconnection scheme. In information, as equipment shrinks, the power contributes of NoC links starts to compete with that of NoC routers. In this paper, we propose the use of clock gating in the data encoding techniques as a viable way to reduce both power dissipation and time consumption of NoC links. The projected scramble scheme exploits the wormhole switching techniques. That is, flits are scramble by the network interface (NI) before they are injected in the network and are decoded by the target NI. This makes the scheme transparent to the underlying network since the encoder and decoder logic is integrated in the NI and no modification of the routers structural design is required. We review the projected scramble scheme on a set of representative data streams (both synthetic and extracted from real applications) showing that it is possible to reduce the power contribution of both the self-switching activity and the coupling switching activity in inter-routers links. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xilinx%2012.1" title="Xilinx 12.1">Xilinx 12.1</a>, <a href="https://publications.waset.org/abstracts/search?q=power%20consumption" title=" power consumption"> power consumption</a>, <a href="https://publications.waset.org/abstracts/search?q=Encoder" title=" Encoder"> Encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=NOC" title=" NOC"> NOC</a> </p> <a href="https://publications.waset.org/abstracts/32831/reducing-power-consumption-in-network-on-chip-using-scramble-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32831.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">400</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> Evaluation in Vitro and in Silico of Pleurotus ostreatus Capacity to Decrease the Amount of Low-Density Polyethylene Microplastics Present in Water Sample from the Middle Basin of the Magdalena River, Colombia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Loren%20S.%20Bernal.">Loren S. Bernal.</a>, <a href="https://publications.waset.org/abstracts/search?q=Catalina%20Castillo"> Catalina Castillo</a>, <a href="https://publications.waset.org/abstracts/search?q=Carel%20E.%20Carvajal"> Carel E. Carvajal</a>, <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20F.%20Ibla"> José F. Ibla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Plastic pollution, specifically microplastics, has become a significant issue in aquatic ecosystems worldwide. The large amount of plastic waste carried by water tributaries has resulted in the accumulation of microplastics in water bodies. The polymer aging process caused by environmental influences such as photodegradation and chemical degradation of additives leads to polymer embrittlement and properties change that require degradation or reduction procedures in rivers. However, there is a lack of such procedures for freshwater entities that develop over extended periods. The aim of this study is evaluate the potential of Pleurotus ostreatus a fungus, in reducing lowdensity polyethylene microplastics present in freshwater samples collected from the middle basin of the Magdalena River in Colombia. The study aims to evaluate this process both in vitro and in silico by identifying the growth capacity of Pleurotus ostreatus in the presence of microplastics and identifying the most likely interactions of Pleurotus ostreatus enzymes and their affinity energies. The study follows an engineering development methodology applied on an experimental basis. The in vitro evaluation protocol applied in this study focused on the growth capacity of Pleurotus ostreatus on microplastics using enzymatic inducers. In terms of in silico evaluation, molecular simulations were conducted using the Autodock 1.5.7 program to calculate interaction energies. The molecular dynamics were evaluated by using the myPresto Portal and GROMACS program to calculate radius of gyration and Energies.The results of the study showed that Pleurotus ostreatus has the potential to degrade low-density polyethylene microplastics. 
The in vitro evaluation revealed the adherence of Pleurotus ostreatus to LDPE using scanning electron microscopy. The best results were obtained with enzymatic inducers such as MnSO4, which trigger the activation of laccase or manganese peroxidase enzymes in the degradation process. The in silico modelling demonstrated that Pleurotus ostreatus can interact with the microplastics present in LDPE, showing favourable affinity energies in molecular docking, while the molecular dynamics showed a minimum energy and a representative radius of gyration for each enzyme and its substrate. The study contributes to the development of bioremediation processes for the removal of microplastics from freshwater sources using the fungus Pleurotus ostreatus. The in silico study provides insights into the affinity energies of the microplastic-degrading enzymes of Pleurotus ostreatus and their interaction with low-density polyethylene. The study demonstrated that Pleurotus ostreatus can interact with LDPE microplastics, making it a good candidate for the development of bioremediation processes that aid in the recovery of freshwater sources. The results suggest that bioremediation could be a promising approach to reduce microplastics in freshwater systems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bioremediation" title="bioremediation">bioremediation</a>, <a href="https://publications.waset.org/abstracts/search?q=in%20silico%20modelling" title=" in silico modelling"> in silico modelling</a>, <a href="https://publications.waset.org/abstracts/search?q=microplastics" title=" microplastics"> microplastics</a>, <a href="https://publications.waset.org/abstracts/search?q=Pleurotus%20ostreatus" title=" Pleurotus ostreatus"> Pleurotus ostreatus</a> </p> <a href="https://publications.waset.org/abstracts/165402/evaluation-in-vitro-and-in-silico-of-pleurotus-ostreatus-capacity-to-decrease-the-amount-of-low-density-polyethylene-microplastics-present-in-water-sample-from-the-middle-basin-of-the-magdalena-river-colombia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165402.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">114</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> Drug-Drug Interaction Prediction in Diabetes Mellitus</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashini%20Maduka">Rashini Maduka</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20R.%20Wijesinghe"> C. R. Wijesinghe</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20R.%20Weerasinghe"> A. R. Weerasinghe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Drug-drug interactions (DDIs) can happen when two or more drugs are taken together. Today, DDIs have become a serious health issue due to adverse drug effects. In vivo and in vitro methods for identifying DDIs are time-consuming and costly; therefore, in silico approaches are preferred for DDI identification. Most machine learning models for DDI prediction use chemical and biological drug properties as features. However, some drug features are not available and are costly to extract. 
Therefore, automatic feature engineering is preferable. Furthermore, people who have diabetes often already suffer from other diseases and take more than one medicine at a time, so adverse drug effects may occur in diabetic patients and cause unpleasant reactions in the body. In this study, we present a model with a graph convolutional autoencoder and a graph decoder, using a dataset from DrugBank version 5.1.3. The main objective of the model is to identify unknown interactions between antidiabetic drugs and the drugs taken by diabetic patients for other diseases. We adopted automatic feature engineering and used known DDIs only as the input for the model. Our model achieved an AUC of 0.86 and an AP of 0.86. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=drug-drug%20interaction%20prediction" title="drug-drug interaction prediction">drug-drug interaction prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20embedding" title=" graph embedding"> graph embedding</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks" title=" graph convolutional networks"> graph convolutional networks</a>, <a href="https://publications.waset.org/abstracts/search?q=adverse%20drug%20effects" title=" adverse drug effects"> adverse drug effects</a> </p> <a href="https://publications.waset.org/abstracts/165305/drug-drug-interaction-prediction-in-diabetes-mellitus" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165305.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">100</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20</span> Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Alqahtani">Hamed Alqahtani</a>, <a href="https://publications.waset.org/abstracts/search?q=Manolya%20Kavakli-Thorne"> Manolya Kavakli-Thorne</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The large pose discrepancy is one of the critical challenges in face recognition during video surveillance. Due to the entanglement of pose attributes with identity information, conventional approaches to pose-independent representation fall short of providing quality results when recognizing largely posed faces. In this paper, we propose a practical approach to disentangle the pose attribute from the identity information, followed by synthesis of a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space for carrying out factorization on the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and comparison with the state of the art in the field show that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses. 
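<p class="card-text"><strong>Illustrative sketch:</strong> the factorization idea, an encoder whose latent code is trained adversarially against a latent classifier so that pose can no longer be predicted from it, while a decoder reconstructs the face from the code plus a pose label, can be outlined in PyTorch as below; the dimensions, number of pose classes, and loss weights are assumptions.</p> <pre><code class="language-python">
import torch
import torch.nn as nn

# Sketch of adversarial disentanglement with a latent classifier.
# Image size, latent width, pose classes, and loss weights are assumed.
LATENT, POSES, PIX = 64, 9, 64 * 64

encoder = nn.Sequential(nn.Flatten(), nn.Linear(PIX, 256), nn.ReLU(),
                        nn.Linear(256, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT + POSES, 256), nn.ReLU(),
                        nn.Linear(256, PIX))
latent_clf = nn.Linear(LATENT, POSES)   # tries to read pose from the code

opt_ae = torch.optim.Adam(list(encoder.parameters()) +
                          list(decoder.parameters()), lr=1e-4)
opt_clf = torch.optim.Adam(latent_clf.parameters(), lr=1e-4)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

def train_step(images, pose_labels):
    # 1) Train the latent classifier to detect pose in the latent code.
    z = encoder(images).detach()
    opt_clf.zero_grad()
    ce(latent_clf(z), pose_labels).backward()
    opt_clf.step()

    # 2) Train encoder/decoder: reconstruct from (code, pose label) while
    #    pushing the classifier toward a uniform (uninformative) output,
    #    so the code sheds pose but keeps identity information.
    z = encoder(images)
    pose_onehot = torch.eye(POSES)[pose_labels]
    recon = decoder(torch.cat([z, pose_onehot], dim=1))
    log_probs = torch.log_softmax(latent_clf(z), dim=1)
    adv_loss = -log_probs.mean()            # cross-entropy vs. uniform
    loss = mse(recon, images.flatten(1)) + 0.1 * adv_loss  # weight assumed
    opt_ae.zero_grad()
    loss.backward()
    opt_ae.step()
    return float(loss)

print(train_step(torch.rand(8, 1, 64, 64), torch.randint(0, POSES, (8,))))
</code></pre>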
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=disentanglement" title="disentanglement">disentanglement</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/108319/adversarial-disentanglement-using-latent-classifier-for-pose-independent-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108319.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> Multimodal Direct Neural Network Positron Emission Tomography Reconstruction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=William%20Whiteley">William Whiteley</a>, <a href="https://publications.waset.org/abstracts/search?q=Jens%20Gregor"> Jens Gregor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent developments of direct neural network based positron emission tomography (PET) reconstruction, two prominent architectures have emerged for converting measurement data into images: 1) networks that contain fully-connected layers; and 2) networks that primarily use a convolutional encoder-decoder architecture. In this paper, we present a multi-modal direct PET reconstruction method called MDPET, which is a hybrid approach that combines the advantages of both types of networks. MDPET processes raw data in the form of sinograms and histo-images in concert with attenuation maps to produce high quality multi-slice PET images (e.g., 8x440x440). MDPET is trained on a large whole-body patient data set and evaluated both quantitatively and qualitatively against target images reconstructed with the standard PET reconstruction benchmark of iterative ordered subsets expectation maximization. The results show that MDPET outperforms the best previously published direct neural network methods in measures of bias, signal-to-noise ratio, mean absolute error, and structural similarity. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20reconstruction" title=" image reconstruction"> image reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=positron%20emission%20tomography" title=" positron emission tomography"> positron emission tomography</a> </p> <a href="https://publications.waset.org/abstracts/126580/multimodal-direct-neural-network-positron-emission-tomography-reconstruction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">111</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> Randomness in Cybertext: A Study on Computer-Generated Poetry from the Perspective of Semiotics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hongliang%20Zhang">Hongliang Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use of chance procedures and randomizers in poetry-writing can be traced back to surrealist works, which, by appealing to Sigmund Freud's theories, were still logocentrism. In the 1960s, random permutation and combination were extensively used by the Oulipo, John Cage and Jackson Mac Low, which further deconstructed the metaphysical presence of writing. Today, the randomly-generated digital poetry has emerged as a genre of cybertext which should be co-authored by readers. At the same time, the classical theories have now been updated by cybernetics and media theories. N· Katherine Hayles put forward the concept of ‘the floating signifiers’ by Jacques Lacan to be the ‘the flickering signifiers’ , arguing that the technology per se has become a part of the textual production. This paper makes a historical review of the computer-generated poetry in the perspective of semiotics, emphasizing that the randomly-generated digital poetry which hands over the dual tasks of both interpretation and writing to the readers demonstrates the intervention of media technology in literature. With the participation of computerized algorithm and programming languages, poems randomly generated by computers have not only blurred the boundary between encoder and decoder, but also raises the issue of human-machine. It is also a significant feature of the cybertext that the productive process of the text is full of randomness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cybertext" title="cybertext">cybertext</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20poetry" title=" digital poetry"> digital poetry</a>, <a href="https://publications.waset.org/abstracts/search?q=poetry%20generator" title=" poetry generator"> poetry generator</a>, <a href="https://publications.waset.org/abstracts/search?q=semiotics" title=" semiotics"> semiotics</a> </p> <a href="https://publications.waset.org/abstracts/96100/randomness-in-cybertext-a-study-on-computer-generated-poetry-from-the-perspective-of-semiotics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96100.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> Sea-Land Segmentation Method Based on the Transformer with Enhanced Edge Supervision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lianzhong%20Zhang">Lianzhong Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Huang"> Chao Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sea-land segmentation is a basic step in many tasks such as sea surface monitoring and ship detection. The existing sea-land segmentation algorithms have poor segmentation accuracy, and the parameter adjustments are cumbersome and difficult to meet actual needs. Also, the current sea-land segmentation adopts traditional deep learning models that use Convolutional Neural Networks (CNN). At present, the transformer architecture has achieved great success in the field of natural images, but its application in the field of radar images is less studied. Therefore, this paper proposes a sea-land segmentation method based on the transformer architecture to strengthen edge supervision. It uses a self-attention mechanism with a gating strategy to better learn relative position bias. Meanwhile, an additional edge supervision branch is introduced. The decoder stage allows the feature information of the two branches to interact, thereby improving the edge precision of the sea-land segmentation. Based on the Gaofen-3 satellite image dataset, the experimental results show that the method proposed in this paper can effectively improve the accuracy of sea-land segmentation, especially the accuracy of sea-land edges. The mean IoU (Intersection over Union), edge precision, overall precision, and F1 scores respectively reach 96.36%, 84.54%, 99.74%, and 98.05%, which are superior to those of the mainstream segmentation models and have high practical application values. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SAR" title="SAR">SAR</a>, <a href="https://publications.waset.org/abstracts/search?q=sea-land%20segmentation" title=" sea-land segmentation"> sea-land segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer" title=" transformer"> transformer</a> </p> <a href="https://publications.waset.org/abstracts/148759/sea-land-segmentation-method-based-on-the-transformer-with-enhanced-edge-supervision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148759.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">181</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">16</span> Maximum-likelihood Inference of Multi-Finger Movements Using Neural Activities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kyung-Jin%20You">Kyung-Jin You</a>, <a href="https://publications.waset.org/abstracts/search?q=Kiwon%20Rhee"> Kiwon Rhee</a>, <a href="https://publications.waset.org/abstracts/search?q=Marc%20H.%20Schieber"> Marc H. Schieber</a>, <a href="https://publications.waset.org/abstracts/search?q=Nitish%20V.%20Thakor"> Nitish V. Thakor</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyun-Chool%20Shin">Hyun-Chool Shin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It remains unknown whether M1 neurons encode multi-finger movements independently or as a certain neural network of single finger movements although multi-finger movements are physically a combination of single finger movements. We present an evidence of correlation between single and multi-finger movements and also attempt a challenging task of semi-blind decoding of neural data with minimum training of the neural decoder. Data were collected from 115 task-related neurons in M1 of a trained rhesus monkey performing flexion and extension of each finger and the wrist (12 single and 6 two-finger-movements). By exploiting correlation of temporal firing pattern between movements, we found that correlation coefficient for physically related movements pairs is greater than others; neurons tuned to single finger movements increased their firing rate when multi-finger commands were instructed. According to this knowledge, neural semi-blind decoding is done by choosing the greatest and the second greatest likelihood for canonical candidates. We achieved a decoding accuracy about 60% for multiple finger movement without corresponding training data set. this results suggest that only with the neural activities on single finger movements can be exploited to control dexterous multi-fingered neuroprosthetics. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=finger%20movement" title="finger movement">finger movement</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20activity" title=" neural activity"> neural activity</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20decoding" title=" blind decoding"> blind decoding</a>, <a href="https://publications.waset.org/abstracts/search?q=M1" title=" M1"> M1</a> </p> <a href="https://publications.waset.org/abstracts/1874/maximum-likelihood-inference-of-multi-finger-movements-using-neural-activities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1874.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low-density%20parity-check%20%28LDPC%29%20decoder&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low-density%20parity-check%20%28LDPC%29%20decoder&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>