Search results for: decoder
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="decoder"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 36</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: decoder</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">36</span> Image Captioning with Vision-Language Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Promise%20Ekpo%20Osaine">Promise Ekpo Osaine</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Melesse"> Daniel Melesse</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image captioning is an active area of research in the multi-modal artificial intelligence (AI) community as it connects vision and language understanding, especially in settings where it is required that a model understands the content shown in an image and generates semantically and grammatically correct descriptions. In this project, we followed a standard approach to a deep learning-based image captioning model, injecting architecture for the encoder-decoder setup, where the encoder extracts image features, and the decoder generates a sequence of words that represents the image content. As such, we investigated image encoders, which are ResNet101, InceptionResNetV2, EfficientNetB7, EfficientNetV2M, and CLIP. As a caption generation structure, we explored long short-term memory (LSTM). The CLIP-LSTM model demonstrated superior performance compared to the encoder-decoder models, achieving a BLEU-1 score of 0.904 and a BLEU-4 score of 0.640. Additionally, among the CNN-LSTM models, EfficientNetV2M-LSTM exhibited the highest performance with a BLEU-1 score of 0.896 and a BLEU-4 score of 0.586 while using a single-layer LSTM. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-modal%20AI%20systems" title="multi-modal AI systems">multi-modal AI systems</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20captioning" title=" image captioning"> image captioning</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder" title=" encoder"> encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=decoder" title=" decoder"> decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=BLUE%20score" title=" BLUE score"> BLUE score</a> </p> <a href="https://publications.waset.org/abstracts/181849/image-captioning-with-vision-language-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181849.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">77</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">35</span> A Guide to the Implementation of Ambisonics Super Stereo</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alessio%20Mastrorillo">Alessio Mastrorillo</a>, <a href="https://publications.waset.org/abstracts/search?q=Giuseppe%20Silvi"> Giuseppe Silvi</a>, <a href="https://publications.waset.org/abstracts/search?q=Francesco%20Scagliola"> Francesco Scagliola</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we introduce an Ambisonics decoder with an implementation of the C-format, also called Super Stereo. This format is an alternative to conventional stereo and binaural decoding. Unlike those, this format conveys audio information from the horizontal plane and works with stereo speakers and headphones. The two C-format channels can also return a reconstructed planar B-format. This work provides an open-source implementation for this format. We implement an all-pass filter for signal quadrature, as required by the decoding equations. This filter works with six Biquads in a cascade configuration, with values for control frequency and quality factor discovered experimentally. The phase response of the filter delivers a small error in the 20-14.000Hz range. The decoder has been tested with audio sources up to 192kHz sample rate, returning pristine sound quality and detailed stereo image. It has been included in the Envelop for Live suite and is available as an open-source repository. This decoder has applications in Virtual Reality and 360° audio productions, music composition, and online streaming. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ambisonics" title="ambisonics">ambisonics</a>, <a href="https://publications.waset.org/abstracts/search?q=UHJ" title=" UHJ"> UHJ</a>, <a href="https://publications.waset.org/abstracts/search?q=quadrature%20filter" title=" quadrature filter"> quadrature filter</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title=" virtual reality"> virtual reality</a>, <a href="https://publications.waset.org/abstracts/search?q=Gerzon" title=" Gerzon"> Gerzon</a>, <a href="https://publications.waset.org/abstracts/search?q=decoder" title=" decoder"> decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo" title=" stereo"> stereo</a>, <a href="https://publications.waset.org/abstracts/search?q=binaural" title=" binaural"> binaural</a>, <a href="https://publications.waset.org/abstracts/search?q=biquad" title=" biquad"> biquad</a> </p> <a href="https://publications.waset.org/abstracts/158051/a-guide-to-the-implementation-of-ambisonics-super-stereo" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158051.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">91</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">34</span> High Performance Field Programmable Gate Array-Based Stochastic Low-Density Parity-Check Decoder Design for IEEE 802.3an Standard </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ghania%20Zerari">Ghania Zerari</a>, <a href="https://publications.waset.org/abstracts/search?q=Abderrezak%20Guessoum"> Abderrezak Guessoum</a>, <a href="https://publications.waset.org/abstracts/search?q=Rachid%20Beguenane"> Rachid Beguenane</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces high-performance architecture for fully parallel stochastic Low-Density Parity-Check (LDPC) field programmable gate array (FPGA) based LDPC decoder. The new approach is designed to decrease the decoding latency and to reduce the FPGA logic utilisation. To accomplish the target logic utilisation reduction, the routing of the proposed sub-variable node (VN) internal memory is designed to utilize one slice distributed RAM. Furthermore, a VN initialization, using the channel input probability, is achieved to enhance the decoder convergence, without extra resources and without integrating the output saturated-counters. The Xilinx FPGA implementation, of IEEE 802.3an standard LDPC code, shows that the proposed decoding approach attain high performance along with reduction of FPGA logic utilisation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=low-density%20parity-check%20%28LDPC%29%20decoder" title="low-density parity-check (LDPC) decoder">low-density parity-check (LDPC) decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=stochastic%20decoding" title=" stochastic decoding"> stochastic decoding</a>, <a href="https://publications.waset.org/abstracts/search?q=field%20programmable%20gate%20array%20%28FPGA%29" title=" field programmable gate array (FPGA)"> field programmable gate array (FPGA)</a>, <a href="https://publications.waset.org/abstracts/search?q=IEEE%20802.3an%20standard" title=" IEEE 802.3an standard"> IEEE 802.3an standard</a> </p> <a href="https://publications.waset.org/abstracts/81538/high-performance-field-programmable-gate-array-based-stochastic-low-density-parity-check-decoder-design-for-ieee-8023an-standard" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81538.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">33</span> Extending Image Captioning to Video Captioning Using Encoder-Decoder</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sikiru%20Ademola%20Adewale">Sikiru Ademola Adewale</a>, <a href="https://publications.waset.org/abstracts/search?q=Joe%20Thomas"> Joe Thomas</a>, <a href="https://publications.waset.org/abstracts/search?q=Bolanle%20Hafiz%20Matti"> Bolanle Hafiz Matti</a>, <a href="https://publications.waset.org/abstracts/search?q=Tosin%20Ige"> Tosin Ige</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This project demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over the video temporal dimension. Predicted captions were shown to generalize over video action, even in instances where the video scene changed dramatically. Model architecture changes are discussed to improve sentence grammar and correctness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=decoder" title="decoder">decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder" title=" encoder"> encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=many-to-many%20mapping" title=" many-to-many mapping"> many-to-many mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20captioning" title=" video captioning"> video captioning</a>, <a href="https://publications.waset.org/abstracts/search?q=2-gram%20BLEU" title=" 2-gram BLEU"> 2-gram BLEU</a> </p> <a href="https://publications.waset.org/abstracts/164540/extending-image-captioning-to-video-captioning-using-encoder-decoder" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164540.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">32</span> Analysis of Joint Source Channel LDPC Coding for Correlated Sources Transmission over Noisy Channels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marwa%20Ben%20Abdessalem">Marwa Ben Abdessalem</a>, <a href="https://publications.waset.org/abstracts/search?q=Amin%20Zribi"> Amin Zribi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ammar%20Bouall%C3%A8gue"> Ammar Bouallègue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a Joint Source Channel coding scheme based on LDPC codes is investigated. We consider two concatenated LDPC codes, one allows to compress a correlated source and the second to protect it against channel degradations. The original information can be reconstructed at the receiver by a joint decoder, where the source decoder and the channel decoder run in parallel by transferring extrinsic information. We investigate the performance of the JSC LDPC code in terms of Bit-Error Rate (BER) in the case of transmission over an Additive White Gaussian Noise (AWGN) channel, and for different source and channel rate parameters. We emphasize how JSC LDPC presents a performance tradeoff depending on the channel state and on the source correlation. We show that, the JSC LDPC is an efficient solution for a relatively low Signal-to-Noise Ratio (SNR) channel, especially with highly correlated sources. Finally, a source-channel rate optimization has to be applied to guarantee the best JSC LDPC system performance for a given channel. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AWGN%20channel" title="AWGN channel">AWGN channel</a>, <a href="https://publications.waset.org/abstracts/search?q=belief%20propagation" title=" belief propagation"> belief propagation</a>, <a href="https://publications.waset.org/abstracts/search?q=joint%20source%20channel%20coding" title=" joint source channel coding"> joint source channel coding</a>, <a href="https://publications.waset.org/abstracts/search?q=LDPC%20codes" title=" LDPC codes"> LDPC codes</a> </p> <a href="https://publications.waset.org/abstracts/62721/analysis-of-joint-source-channel-ldpc-coding-for-correlated-sources-transmission-over-noisy-channels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62721.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">357</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">31</span> End-to-End Spanish-English Sequence Learning Translation Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vidhu%20Mitha%20Goutham">Vidhu Mitha Goutham</a>, <a href="https://publications.waset.org/abstracts/search?q=Ruma%20Mukherjee"> Ruma Mukherjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The low availability of well-trained, unlimited, dynamic-access models for specific languages makes it hard for corporate users to adopt quick translation techniques and incorporate them into product solutions. As translation tasks increasingly require a dynamic sequence learning curve; stable, cost-free opensource models are scarce. We survey and compare current translation techniques and propose a modified sequence to sequence model repurposed with attention techniques. Sequence learning using an encoder-decoder model is now paving the path for higher precision levels in translation. Using a Convolutional Neural Network (CNN) encoder and a Recurrent Neural Network (RNN) decoder background, we use Fairseq tools to produce an end-to-end bilingually trained Spanish-English machine translation model including source language detection. We acquire competitive results using a duo-lingo-corpus trained model to provide for prospective, ready-made plug-in use for compound sentences and document translations. Our model serves a decent system for large, organizational data translation needs. While acknowledging its shortcomings and future scope, it also identifies itself as a well-optimized deep neural network model and solution. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attention" title="attention">attention</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder-decoder" title=" encoder-decoder"> encoder-decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=Fairseq" title=" Fairseq"> Fairseq</a>, <a href="https://publications.waset.org/abstracts/search?q=Seq2Seq" title=" Seq2Seq"> Seq2Seq</a>, <a href="https://publications.waset.org/abstracts/search?q=Spanish" title=" Spanish"> Spanish</a>, <a href="https://publications.waset.org/abstracts/search?q=translation" title=" translation"> translation</a> </p> <a href="https://publications.waset.org/abstracts/132739/end-to-end-spanish-english-sequence-learning-translation-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132739.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">30</span> Deep-Learning to Generation of Weights for Image Captioning Using Part-of-Speech Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tiago%20do%20Carmo%20Nogueira">Tiago do Carmo Nogueira</a>, <a href="https://publications.waset.org/abstracts/search?q=C%C3%A1ssio%20Dener%20Noronha%20Vinhal"> Cássio Dener Noronha Vinhal</a>, <a href="https://publications.waset.org/abstracts/search?q=G%C3%A9lson%20da%20Cruz%20J%C3%BAnior"> Gélson da Cruz Júnior</a>, <a href="https://publications.waset.org/abstracts/search?q=Matheus%20Rudolfo%20Diedrich%20Ullmann"> Matheus Rudolfo Diedrich Ullmann</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Generating automatic image descriptions through natural language is a challenging task. Image captioning is a task that consistently describes an image by combining computer vision and natural language processing techniques. To accomplish this task, cutting-edge models use encoder-decoder structures. Thus, Convolutional Neural Networks (CNN) are used to extract the characteristics of the images, and Recurrent Neural Networks (RNN) generate the descriptive sentences of the images. However, cutting-edge approaches still suffer from problems of generating incorrect captions and accumulating errors in the decoders. To solve this problem, we propose a model based on the encoder-decoder structure, introducing a module that generates the weights according to the importance of the word to form the sentence, using the part-of-speech (PoS). Thus, the results demonstrate that our model surpasses state-of-the-art models. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gated%20recurrent%20units" title="gated recurrent units">gated recurrent units</a>, <a href="https://publications.waset.org/abstracts/search?q=caption%20generation" title=" caption generation"> caption generation</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=part-of-speech" title=" part-of-speech"> part-of-speech</a> </p> <a href="https://publications.waset.org/abstracts/159076/deep-learning-to-generation-of-weights-for-image-captioning-using-part-of-speech-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159076.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">29</span> Towards Long-Range Pixels Connection for Context-Aware Semantic Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Zubair%20Khan">Muhammad Zubair Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Yugyung%20Lee"> Yugyung Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep learning has recently achieved enormous response in semantic image segmentation. The previously developed U-Net inspired architectures operate with continuous stride and pooling operations, leading to spatial data loss. Also, the methods lack establishing long-term pixels connection to preserve context knowledge and reduce spatial loss in prediction. This article developed encoder-decoder architecture with bi-directional LSTM embedded in long skip-connections and densely connected convolution blocks. The network non-linearly combines the feature maps across encoder-decoder paths for finding dependency and correlation between image pixels. Additionally, the densely connected convolutional blocks are kept in the final encoding layer to reuse features and prevent redundant data sharing. The method applied batch-normalization for reducing internal covariate shift in data distributions. The empirical evidence shows a promising response to our method compared with other semantic segmentation techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=pixels%20connection" title=" pixels connection"> pixels connection</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a> </p> <a href="https://publications.waset.org/abstracts/147965/towards-long-range-pixels-connection-for-context-aware-semantic-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147965.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28</span> Low Light Image Enhancement with Multi-Stage Interconnected Autoencoders Integration in Pix to Pix GAN</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Atif">Muhammad Atif</a>, <a href="https://publications.waset.org/abstracts/search?q=Cang%20Yan"> Cang Yan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The enhancement of low-light images is a significant area of study aimed at enhancing the quality of captured images in challenging lighting environments. Recently, methods based on convolutional neural networks (CNN) have gained prominence as they offer state-of-the-art performance. However, many approaches based on CNN rely on increasing the size and complexity of the neural network. In this study, we propose an alternative method for improving low-light images using an autoencoder-based multiscale knowledge transfer model. Our method leverages the power of three autoencoders, where the encoders of the first two autoencoders are directly connected to the decoder of the third autoencoder. Additionally, the decoder of the first two autoencoders is connected to the encoder of the third autoencoder. This architecture enables effective knowledge transfer, allowing the third autoencoder to learn and benefit from the enhanced knowledge extracted by the first two autoencoders. We further integrate the proposed model into the PIX to PIX GAN framework. By integrating our proposed model as the generator in the GAN framework, we aim to produce enhanced images that not only exhibit improved visual quality but also possess a more authentic and realistic appearance. These experimental results, both qualitative and quantitative, show that our method is better than the state-of-the-art methodologies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=low%20light%20image%20enhancement" title="low light image enhancement">low light image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/180048/low-light-image-enhancement-with-multi-stage-interconnected-autoencoders-integration-in-pix-to-pix-gan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/180048.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">80</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">27</span> Design of SAE J2716 Single Edge Nibble Transmission Digital Sensor Interface for Automotive Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jongbae%20Lee">Jongbae Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Seongsoo%20Lee"> Seongsoo Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Modern sensors often embed small-size digital controller for sensor control, value calibration, and signal processing. These sensors require digital data communication with host microprocessors, but conventional digital communication protocols are too heavy for price reduction. SAE J2716 SENT (single edge nibble transmission) protocol transmits direct digital waveforms instead of complicated analog modulated signals. In this paper, a SENT interface is designed in Verilog HDL (hardware description language) and implemented in FPGA (field-programmable gate array) evaluation board. The designed SENT interface consists of frame encoder/decoder, configuration register, tick period generator, CRC (cyclic redundancy code) generator/checker, and TX/RX (transmission/reception) buffer. Frame encoder/decoder is implemented as a finite state machine, and it controls whole SENT interface. Configuration register contains various parameters such as operation mode, tick length, CRC option, pause pulse option, and number of nibble data. Tick period generator generates tick signals from input clock. CRC generator/checker generates or checks CRC in the SENT data frame. TX/RX buffer stores transmission/received data. The designed SENT interface can send or receives digital data in 25~65 kbps at 3 us tick. Synthesized in 0.18 um fabrication technologies, it is implemented about 2,500 gates. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20sensor%20interface" title="digital sensor interface">digital sensor interface</a>, <a href="https://publications.waset.org/abstracts/search?q=SAE%20J2716" title=" SAE J2716"> SAE J2716</a>, <a href="https://publications.waset.org/abstracts/search?q=SENT" title=" SENT"> SENT</a>, <a href="https://publications.waset.org/abstracts/search?q=verilog%20HDL" title=" verilog HDL"> verilog HDL</a> </p> <a href="https://publications.waset.org/abstracts/94620/design-of-sae-j2716-single-edge-nibble-transmission-digital-sensor-interface-for-automotive-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94620.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">300</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">26</span> Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shweta%20Singh">Shweta Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Sudaman%20Katti"> Sudaman Katti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders which make use of self-attention and operate based on a memory, are used. In this research work, results for various 3D visual and non-visual reinforcement learning tasks designed in Unity software were obtained. Convolutional neural networks, more specifically, nature CNN architecture, are used for input processing in visual tasks, and comparison with standard long short-term memory (LSTM) architecture is performed for both visual tasks based on CNNs and non-visual tasks based on coordinate inputs. This research work combines the transformer architecture with the proximal policy optimization technique used popularly in reinforcement learning for stability and better policy updates while training, especially for continuous action spaces, which are used in this research work. Certain tasks in this paper are long horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities like storage of experiences and choosing appropriate actions based on recall. The transformer, which makes use of memory and self-attention mechanism in an encoder-decoder configuration proved to have better performance when compared to LSTM in terms of exploration and rewards achieved. Such memory based architectures can be used extensively in the field of cognitive robotics and reinforcement learning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=reinforcement%20learning" title=" reinforcement learning"> reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=self-attention" title=" self-attention"> self-attention</a>, <a href="https://publications.waset.org/abstracts/search?q=transformers" title=" transformers"> transformers</a>, <a href="https://publications.waset.org/abstracts/search?q=unity" title=" unity"> unity</a> </p> <a href="https://publications.waset.org/abstracts/163301/memory-based-reinforcement-learning-with-transformers-for-long-horizon-timescales-and-continuous-action-spaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163301.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> A CMOS Capacitor Array for ESPAR with Fast Switching Time</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jin-Sup%20Kim">Jin-Sup Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Se-Hwan%20Choi"> Se-Hwan Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jae-Young%20Lee"> Jae-Young Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A 8-bit CMOS capacitor array is designed for using in electrically steerable passive array radiator (ESPAR). The proposed capacitor array shows the fast response time in rising and falling characteristics. Compared to other works in silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technologies, it shows a comparable tuning range and switching time with low power consumption. Using the 0.18um CMOS, the capacitor array features a tuning range of 1.5 to 12.9 pF at 2.4GHz. Including the 2X4 decoder for control interface, the Chip size is 350um X 145um. Current consumption is about 80 nA at 1.8 V operation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CMOS%20capacitor%20array" title="CMOS capacitor array">CMOS capacitor array</a>, <a href="https://publications.waset.org/abstracts/search?q=ESPAR" title=" ESPAR"> ESPAR</a>, <a href="https://publications.waset.org/abstracts/search?q=SOI" title=" SOI"> SOI</a>, <a href="https://publications.waset.org/abstracts/search?q=SOS" title=" SOS"> SOS</a>, <a href="https://publications.waset.org/abstracts/search?q=switching%20time" title=" switching time"> switching time</a> </p> <a href="https://publications.waset.org/abstracts/24058/a-cmos-capacitor-array-for-espar-with-fast-switching-time" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24058.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">589</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">24</span> Defect Detection for Nanofibrous Images with Deep Learning-Based Approaches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaokai%20Liu">Gaokai Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic defect detection for nanomaterial images is widely required in industrial scenarios. Deep learning approaches are considered as the most effective solutions for the great majority of image-based tasks. In this paper, an edge guidance network for defect segmentation is proposed. First, the encoder path with multiple convolution and downsampling operations is applied to the acquisition of shared features. Then two decoder paths both are connected to the last convolution layer of the encoder and supervised by the edge and segmentation labels, respectively, to guide the whole training process. Meanwhile, the edge and encoder outputs from the same stage are concatenated to the segmentation corresponding part to further tune the segmentation result. Finally, the effectiveness of the proposed method is verified via the experiments on open nanofibrous datasets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=defect%20detection" title=" defect detection"> defect detection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=nanomaterials" title=" nanomaterials"> nanomaterials</a> </p> <a href="https://publications.waset.org/abstracts/133093/defect-detection-for-nanofibrous-images-with-deep-learning-based-approaches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133093.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> Optical Multicast over OBS Networks: An Approach Based on Code-Words and Tunable Decoders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maha%20Sliti">Maha Sliti</a>, <a href="https://publications.waset.org/abstracts/search?q=Walid%20Abdallah"> Walid Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Noureddine%20Boudriga"> Noureddine Boudriga</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the frame of this work, we present an optical multicasting approach based on optical code-words. Our approach associates, in the edge node, an optical code-word to a group multicast address. In the core node, a set of tunable decoders are used to send a traffic data to multiple destinations based on the received code-word. The use of code-words, which correspond to the combination of an input port and a set of output ports, allows the implementation of an optical switching matrix. At the reception of a burst, it will be delayed in an optical memory. And, the received optical code-word is split to a set of tunable optical decoders. When it matches a configured code-word, the delayed burst is switched to a set of output ports. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optical%20multicast" title="optical multicast">optical multicast</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20burst%20switching%20networks" title=" optical burst switching networks"> optical burst switching networks</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20code-words" title=" optical code-words"> optical code-words</a>, <a href="https://publications.waset.org/abstracts/search?q=tunable%20decoder" title=" tunable decoder"> tunable decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20optical%20memory" title=" virtual optical memory"> virtual optical memory</a> </p> <a href="https://publications.waset.org/abstracts/11614/optical-multicast-over-obs-networks-an-approach-based-on-code-words-and-tunable-decoders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11614.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">607</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> Reducing Power Consumption in Network on Chip Using Scramble Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vinayaga%20Jagadessh%20Raja">Vinayaga Jagadessh Raja</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Ganesan"> R. Ganesan</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ramesh%20Kumar"> S. Ramesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An ever more significant fraction of the overall power dissipation of a network-on-chip (NoC) based system on- chip (SoC) is due to the interconnection scheme. In information, as equipment shrinks, the power contributes of NoC links starts to compete with that of NoC routers. In this paper, we propose the use of clock gating in the data encoding techniques as a viable way to reduce both power dissipation and time consumption of NoC links. The projected scramble scheme exploits the wormhole switching techniques. That is, flits are scramble by the network interface (NI) before they are injected in the network and are decoded by the target NI. This makes the scheme transparent to the underlying network since the encoder and decoder logic is integrated in the NI and no modification of the routers structural design is required. We review the projected scramble scheme on a set of representative data streams (both synthetic and extracted from real applications) showing that it is possible to reduce the power contribution of both the self-switching activity and the coupling switching activity in inter-routers links. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xilinx%2012.1" title="Xilinx 12.1">Xilinx 12.1</a>, <a href="https://publications.waset.org/abstracts/search?q=power%20consumption" title=" power consumption"> power consumption</a>, <a href="https://publications.waset.org/abstracts/search?q=Encoder" title=" Encoder"> Encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=NOC" title=" NOC"> NOC</a> </p> <a href="https://publications.waset.org/abstracts/32831/reducing-power-consumption-in-network-on-chip-using-scramble-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32831.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> Drug-Drug Interaction Prediction in Diabetes Mellitus</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashini%20Maduka">Rashini Maduka</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20R.%20Wijesinghe"> C. R. Wijesinghe</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20R.%20Weerasinghe"> A. R. Weerasinghe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Drug-drug interactions (DDIs) can happen when two or more drugs are taken together. Today DDIs have become a serious health issue due to adverse drug effects. In vivo and in vitro methods for identifying DDIs are time-consuming and costly. Therefore, in-silico-based approaches are preferred in DDI identification. Most machine learning models for DDI prediction are used chemical and biological drug properties as features. However, some drug features are not available and costly to extract. Therefore, it is better to make automatic feature engineering. Furthermore, people who have diabetes already suffer from other diseases and take more than one medicine together. Then adverse drug effects may happen to diabetic patients and cause unpleasant reactions in the body. In this study, we present a model with a graph convolutional autoencoder and a graph decoder using a dataset from DrugBank version 5.1.3. The main objective of the model is to identify unknown interactions between antidiabetic drugs and the drugs taken by diabetic patients for other diseases. We considered automatic feature engineering and used Known DDIs only as the input for the model. Our model has achieved 0.86 in AUC and 0.86 in AP. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=drug-drug%20interaction%20prediction" title="drug-drug interaction prediction">drug-drug interaction prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20embedding" title=" graph embedding"> graph embedding</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks" title=" graph convolutional networks"> graph convolutional networks</a>, <a href="https://publications.waset.org/abstracts/search?q=adverse%20drug%20effects" title=" adverse drug effects"> adverse drug effects</a> </p> <a href="https://publications.waset.org/abstracts/165305/drug-drug-interaction-prediction-in-diabetes-mellitus" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165305.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">100</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20</span> Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Alqahtani">Hamed Alqahtani</a>, <a href="https://publications.waset.org/abstracts/search?q=Manolya%20Kavakli-Thorne"> Manolya Kavakli-Thorne</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The large pose discrepancy is one of the critical challenges in face recognition during video surveillance. Due to the entanglement of pose attributes with identity information, the conventional approaches for pose-independent representation lack in providing quality results in recognizing largely posed faces. In this paper, we propose a practical approach to disentangle the pose attribute from the identity information followed by synthesis of a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space for carrying out factorization on the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and comparison with state of the art in the field prove that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=disentanglement" title="disentanglement">disentanglement</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/108319/adversarial-disentanglement-using-latent-classifier-for-pose-independent-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108319.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> Multimodal Direct Neural Network Positron Emission Tomography Reconstruction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=William%20Whiteley">William Whiteley</a>, <a href="https://publications.waset.org/abstracts/search?q=Jens%20Gregor"> Jens Gregor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent developments of direct neural network based positron emission tomography (PET) reconstruction, two prominent architectures have emerged for converting measurement data into images: 1) networks that contain fully-connected layers; and 2) networks that primarily use a convolutional encoder-decoder architecture. In this paper, we present a multi-modal direct PET reconstruction method called MDPET, which is a hybrid approach that combines the advantages of both types of networks. MDPET processes raw data in the form of sinograms and histo-images in concert with attenuation maps to produce high quality multi-slice PET images (e.g., 8x440x440). MDPET is trained on a large whole-body patient data set and evaluated both quantitatively and qualitatively against target images reconstructed with the standard PET reconstruction benchmark of iterative ordered subsets expectation maximization. The results show that MDPET outperforms the best previously published direct neural network methods in measures of bias, signal-to-noise ratio, mean absolute error, and structural similarity. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20reconstruction" title=" image reconstruction"> image reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=positron%20emission%20tomography" title=" positron emission tomography"> positron emission tomography</a> </p> <a href="https://publications.waset.org/abstracts/126580/multimodal-direct-neural-network-positron-emission-tomography-reconstruction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">110</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> Randomness in Cybertext: A Study on Computer-Generated Poetry from the Perspective of Semiotics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hongliang%20Zhang">Hongliang Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use of chance procedures and randomizers in poetry-writing can be traced back to surrealist works, which, by appealing to Sigmund Freud's theories, were still logocentrism. In the 1960s, random permutation and combination were extensively used by the Oulipo, John Cage and Jackson Mac Low, which further deconstructed the metaphysical presence of writing. Today, the randomly-generated digital poetry has emerged as a genre of cybertext which should be co-authored by readers. At the same time, the classical theories have now been updated by cybernetics and media theories. N· Katherine Hayles put forward the concept of ‘the floating signifiers’ by Jacques Lacan to be the ‘the flickering signifiers’ , arguing that the technology per se has become a part of the textual production. This paper makes a historical review of the computer-generated poetry in the perspective of semiotics, emphasizing that the randomly-generated digital poetry which hands over the dual tasks of both interpretation and writing to the readers demonstrates the intervention of media technology in literature. With the participation of computerized algorithm and programming languages, poems randomly generated by computers have not only blurred the boundary between encoder and decoder, but also raises the issue of human-machine. It is also a significant feature of the cybertext that the productive process of the text is full of randomness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cybertext" title="cybertext">cybertext</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20poetry" title=" digital poetry"> digital poetry</a>, <a href="https://publications.waset.org/abstracts/search?q=poetry%20generator" title=" poetry generator"> poetry generator</a>, <a href="https://publications.waset.org/abstracts/search?q=semiotics" title=" semiotics"> semiotics</a> </p> <a href="https://publications.waset.org/abstracts/96100/randomness-in-cybertext-a-study-on-computer-generated-poetry-from-the-perspective-of-semiotics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96100.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> Sea-Land Segmentation Method Based on the Transformer with Enhanced Edge Supervision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lianzhong%20Zhang">Lianzhong Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Huang"> Chao Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sea-land segmentation is a basic step in many tasks such as sea surface monitoring and ship detection. The existing sea-land segmentation algorithms have poor segmentation accuracy, and the parameter adjustments are cumbersome and difficult to meet actual needs. Also, the current sea-land segmentation adopts traditional deep learning models that use Convolutional Neural Networks (CNN). At present, the transformer architecture has achieved great success in the field of natural images, but its application in the field of radar images is less studied. Therefore, this paper proposes a sea-land segmentation method based on the transformer architecture to strengthen edge supervision. It uses a self-attention mechanism with a gating strategy to better learn relative position bias. Meanwhile, an additional edge supervision branch is introduced. The decoder stage allows the feature information of the two branches to interact, thereby improving the edge precision of the sea-land segmentation. Based on the Gaofen-3 satellite image dataset, the experimental results show that the method proposed in this paper can effectively improve the accuracy of sea-land segmentation, especially the accuracy of sea-land edges. The mean IoU (Intersection over Union), edge precision, overall precision, and F1 scores respectively reach 96.36%, 84.54%, 99.74%, and 98.05%, which are superior to those of the mainstream segmentation models and have high practical application values. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SAR" title="SAR">SAR</a>, <a href="https://publications.waset.org/abstracts/search?q=sea-land%20segmentation" title=" sea-land segmentation"> sea-land segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer" title=" transformer"> transformer</a> </p> <a href="https://publications.waset.org/abstracts/148759/sea-land-segmentation-method-based-on-the-transformer-with-enhanced-edge-supervision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148759.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">181</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">16</span> Maximum-likelihood Inference of Multi-Finger Movements Using Neural Activities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kyung-Jin%20You">Kyung-Jin You</a>, <a href="https://publications.waset.org/abstracts/search?q=Kiwon%20Rhee"> Kiwon Rhee</a>, <a href="https://publications.waset.org/abstracts/search?q=Marc%20H.%20Schieber"> Marc H. Schieber</a>, <a href="https://publications.waset.org/abstracts/search?q=Nitish%20V.%20Thakor"> Nitish V. Thakor</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyun-Chool%20Shin">Hyun-Chool Shin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It remains unknown whether M1 neurons encode multi-finger movements independently or as a certain neural network of single finger movements although multi-finger movements are physically a combination of single finger movements. We present an evidence of correlation between single and multi-finger movements and also attempt a challenging task of semi-blind decoding of neural data with minimum training of the neural decoder. Data were collected from 115 task-related neurons in M1 of a trained rhesus monkey performing flexion and extension of each finger and the wrist (12 single and 6 two-finger-movements). By exploiting correlation of temporal firing pattern between movements, we found that correlation coefficient for physically related movements pairs is greater than others; neurons tuned to single finger movements increased their firing rate when multi-finger commands were instructed. According to this knowledge, neural semi-blind decoding is done by choosing the greatest and the second greatest likelihood for canonical candidates. We achieved a decoding accuracy about 60% for multiple finger movement without corresponding training data set. this results suggest that only with the neural activities on single finger movements can be exploited to control dexterous multi-fingered neuroprosthetics. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=finger%20movement" title="finger movement">finger movement</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20activity" title=" neural activity"> neural activity</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20decoding" title=" blind decoding"> blind decoding</a>, <a href="https://publications.waset.org/abstracts/search?q=M1" title=" M1"> M1</a> </p> <a href="https://publications.waset.org/abstracts/1874/maximum-likelihood-inference-of-multi-finger-movements-using-neural-activities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1874.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15</span> An Approach of Node Model TCnNet: Trellis Coded Nanonetworks on Graphene Composite Substrate</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Diogo%20Ferreira%20Lima%20Filho">Diogo Ferreira Lima Filho</a>, <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20Roberto%20Amazonas"> José Roberto Amazonas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nanotechnology opens the door to new paradigms that introduces a variety of novel tools enabling a plethora of potential applications in the biomedical, industrial, environmental, and military fields. This work proposes an integrated node model by applying the same concepts of TCNet to networks of nanodevices where the nodes are cooperatively interconnected with a low-complexity Mealy Machine (MM) topology integrating in the same electronic system the modules necessary for independent operation in wireless sensor networks (WSNs), consisting of Rectennas (RF to DC power converters), Code Generators based on Finite State Machine (FSM) & Trellis Decoder and On-chip Transmit/Receive with autonomy in terms of energy sources applying the Energy Harvesting technique. This approach considers the use of a Graphene Composite Substrate (GCS) for the integrated electronic circuits meeting the following characteristics: mechanical flexibility, miniaturization, and optical transparency, besides being ecological. In addition, graphene consists of a layer of carbon atoms with the configuration of a honeycomb crystal lattice, which has attracted the attention of the scientific community due to its unique Electrical Characteristics. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=composite%20substrate" title="composite substrate">composite substrate</a>, <a href="https://publications.waset.org/abstracts/search?q=energy%20harvesting" title=" energy harvesting"> energy harvesting</a>, <a href="https://publications.waset.org/abstracts/search?q=finite%20state%20machine" title=" finite state machine"> finite state machine</a>, <a href="https://publications.waset.org/abstracts/search?q=graphene" title=" graphene"> graphene</a>, <a href="https://publications.waset.org/abstracts/search?q=nanotechnology" title=" nanotechnology"> nanotechnology</a>, <a href="https://publications.waset.org/abstracts/search?q=rectennas" title=" rectennas"> rectennas</a>, <a href="https://publications.waset.org/abstracts/search?q=wireless%20sensor%20networks" title=" wireless sensor networks"> wireless sensor networks</a> </p> <a href="https://publications.waset.org/abstracts/148901/an-approach-of-node-model-tcnnet-trellis-coded-nanonetworks-on-graphene-composite-substrate" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148901.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">105</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14</span> DCDNet: Lightweight Document Corner Detection Network Based on Attention Mechanism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kun%20Xu">Kun Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuan%20Xu"> Yuan Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jia%20Qiao"> Jia Qiao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The document detection plays an important role in optical character recognition and text analysis. Because the traditional detection methods have weak generalization ability, and deep neural network has complex structure and large number of parameters, which cannot be well applied in mobile devices, this paper proposes a lightweight Document Corner Detection Network (DCDNet). DCDNet is a two-stage architecture. The first stage with Encoder-Decoder structure adopts depthwise separable convolution to greatly reduce the network parameters. After introducing the Feature Attention Union (FAU) module, the second stage enhances the feature information of spatial and channel dim and adaptively adjusts the size of receptive field to enhance the feature expression ability of the model. Aiming at solving the problem of the large difference in the number of pixel distribution between corner and non-corner, Weighted Binary Cross Entropy Loss (WBCE Loss) is proposed to define corner detection problem as a classification problem to make the training process more efficient. In order to make up for the lack of Dataset of document corner detection, a Dataset containing 6620 images named Document Corner Detection Dataset (DCDD) is made. Experimental results show that the proposed method can obtain fast, stable and accurate detection results on DCDD. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=document%20detection" title="document detection">document detection</a>, <a href="https://publications.waset.org/abstracts/search?q=corner%20detection" title=" corner detection"> corner detection</a>, <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title=" attention mechanism"> attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=lightweight" title=" lightweight"> lightweight</a> </p> <a href="https://publications.waset.org/abstracts/152145/dcdnet-lightweight-document-corner-detection-network-based-on-attention-mechanism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152145.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">354</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13</span> Deep Vision: A Robust Dominant Colour Extraction Framework for T-Shirts Based on Semantic Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kishore%20Kumar%20R.">Kishore Kumar R.</a>, <a href="https://publications.waset.org/abstracts/search?q=Kaustav%20Sengupta"> Kaustav Sengupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Shalini%20Sood%20Sehgal"> Shalini Sood Sehgal</a>, <a href="https://publications.waset.org/abstracts/search?q=Poornima%20Santhanam"> Poornima Santhanam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fashion is a human expression that is constantly changing. One of the prime factors that consistently influences fashion is the change in colour preferences. The role of colour in our everyday lives is very significant. It subconsciously explains a lot about one’s mindset and mood. Analyzing the colours by extracting them from the outfit images is a critical study to examine the individual’s/consumer behaviour. Several research works have been carried out on extracting colours from images, but to the best of our knowledge, there were no studies that extract colours to specific apparel and identify colour patterns geographically. This paper proposes a framework for accurately extracting colours from T-shirt images and predicting dominant colours geographically. The proposed method consists of two stages: first, a U-Net deep learning model is adopted to segment the T-shirts from the images. Second, the colours are extracted only from the T-shirt segments. The proposed method employs the iMaterialist (Fashion) 2019 dataset for the semantic segmentation task. The proposed framework also includes a mechanism for gathering data and analyzing India’s general colour preferences. From this research, it was observed that black and grey are the dominant colour in different regions of India. The proposed method can be adapted to study fashion’s evolving colour preferences. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=colour%20analysis%20in%20t-shirts" title="colour analysis in t-shirts">colour analysis in t-shirts</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder-decoder" title=" encoder-decoder"> encoder-decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=k-means%20clustering" title=" k-means clustering"> k-means clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=U-Net%20model" title=" U-Net model"> U-Net model</a> </p> <a href="https://publications.waset.org/abstracts/151988/deep-vision-a-robust-dominant-colour-extraction-framework-for-t-shirts-based-on-semantic-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151988.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">111</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12</span> Linear Decoding Applied to V5/MT Neuronal Activity on Past Trials Predicts Current Sensory Choices</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ben%20Hadj%20Hassen%20Sameh">Ben Hadj Hassen Sameh</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaillard%20Corentin"> Gaillard Corentin</a>, <a href="https://publications.waset.org/abstracts/search?q=Andrew%20Parker"> Andrew Parker</a>, <a href="https://publications.waset.org/abstracts/search?q=Kristine%20Krug"> Kristine Krug</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Perceptual decisions about sequences of sensory stimuli often show serial dependence. The behavioural choice on one trial is often affected by the choice on previous trials. We investigated whether the neuronal signals in extrastriate visual area V5/MT on preceding trials might influence choice on the current trial and thereby reveal the neuronal mechanisms of sequential choice effects. We analysed data from 30 single neurons recorded from V5/MT in three Rhesus monkeys making sequential choices about the direction of rotation of a three-dimensional cylinder. We focused exclusively on the responses of neurons that showed significant choice-related firing (mean choice probability =0.73) while the monkey viewed perceptually ambiguous stimuli. Application of a wavelet transform to the choice-related firing revealed differences in the frequency band of neuronal activity that depended on whether the previous trial resulted in a correct choice for an unambiguous stimulus that was in the neuron’s preferred direction (low alpha and high beta and gamma) or non-preferred direction (high alpha and low beta and gamma). To probe this in further detail, we applied a regularized linear decoder to predict the choice for an ambiguous trial by referencing the neuronal activity of the preceding unambiguous trial. 
Neuronal activity on a previous trial provided a significant prediction of the current choice (61% correct, 95% CI ~52%), even when limiting the analysis to preceding trials that were correct and rewarded. These findings provide a potential neuronal signature of sequential choice effects in the primate visual cortex. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=perception" title="perception">perception</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20making" title=" decision making"> decision making</a>, <a href="https://publications.waset.org/abstracts/search?q=attention" title=" attention"> attention</a>, <a href="https://publications.waset.org/abstracts/search?q=decoding" title=" decoding"> decoding</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20system" title=" visual system"> visual system</a> </p> <a href="https://publications.waset.org/abstracts/154327/linear-decoding-applied-to-v5mt-neuronal-activity-on-past-trials-predicts-current-sensory-choices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/154327.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11</span> Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhen%20Cheng">Zhen Cheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Xinyu%20Dai"> Xinyu Dai</a>, <a href="https://publications.waset.org/abstracts/search?q=Shujian%20Huang"> Shujian Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiajun%20Chen"> Jiajun Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction, which is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on a discriminative Encoder-Decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and it is easy to extract the logic information that is explicitly contained in the target explanation. Hence we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained under the guidance of the explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. 
Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark e-SNLI (explanation-Stanford Natural Language Inference) demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20inference" title="natural language inference">natural language inference</a>, <a href="https://publications.waset.org/abstracts/search?q=explanation%20generation" title=" explanation generation"> explanation generation</a>, <a href="https://publications.waset.org/abstracts/search?q=variational%20auto-encoder" title=" variational auto-encoder"> variational auto-encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20model" title=" generative model"> generative model</a> </p> <a href="https://publications.waset.org/abstracts/126633/variational-explanation-generator-generating-explanation-for-natural-language-inference-using-variational-auto-encoder" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126633.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">151</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10</span> Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Souvik%20Phadikar">Souvik Phadikar</a>, <a href="https://publications.waset.org/abstracts/search?q=Nidul%20Sinha"> Nidul Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajdeep%20Ghosh"> Rajdeep Ghosh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a big challenge because of the random nature of the signals. The feature extraction method is key to solving this problem. Finding features that are distinctive across different activities yet consistent for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Further, more features result in higher computational complexity, while fewer features compromise performance. In this paper, a novel idea for selecting an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. 
To avoid the vanishing gradient problem and dataset normalization issues, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, four hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than the other two in terms of classification accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autoencoder" title="autoencoder">autoencoder</a>, <a href="https://publications.waset.org/abstracts/search?q=brainwave%20signal%20analysis" title=" brainwave signal analysis"> brainwave signal analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=electroencephalogram" title=" electroencephalogram"> electroencephalogram</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization" title=" optimization"> optimization</a> </p> <a href="https://publications.waset.org/abstracts/118906/selection-of-optimal-reduced-feature-sets-of-brain-signal-analysis-using-heuristically-optimized-deep-autoencoder" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118906.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">114</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> Evaluating Generative Neural Attention Weights-Based Chatbot on Customer Support Twitter Dataset</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sinarwati%20Mohamad%20Suhaili">Sinarwati Mohamad Suhaili</a>, <a href="https://publications.waset.org/abstracts/search?q=Naomie%20Salim"> Naomie Salim</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamad%20Nazim%20Jambli"> Mohamad Nazim Jambli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sequence-to-sequence (seq2seq) models augmented with attention mechanisms are playing an increasingly important role in automated customer service. These models, which are able to recognize complex relationships between input and output sequences, are crucial for optimizing chatbot responses. Central to these mechanisms are neural attention weights that determine the focus of the model during sequence generation. 
Despite their widespread use, there remains a gap in the comparative analysis of different attention weighting functions within seq2seq models, particularly in the domain of chatbots using the Customer Support Twitter (CST) dataset. This study addresses this gap by evaluating four distinct attention-scoring functions—dot, multiplicative/general, additive, and an extended multiplicative function with a tanh activation parameter — in neural generative seq2seq models. Utilizing the CST dataset, these models were trained and evaluated over 10 epochs with the AdamW optimizer. Evaluation criteria included validation loss and BLEU scores implemented under both greedy and beam search strategies with a beam size of k=3. Results indicate that the model with the tanh-augmented multiplicative function significantly outperforms its counterparts, achieving the lowest validation loss (1.136484) and the highest BLEU scores (0.438926 under greedy search, 0.443000 under beam search, k=3). These results emphasize the crucial influence of selecting an appropriate attention-scoring function in improving the performance of seq2seq models for chatbots. Particularly, the model that integrates tanh activation proves to be a promising approach to improve the quality of chatbots in the customer support context. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attention%20weight" title="attention weight">attention weight</a>, <a href="https://publications.waset.org/abstracts/search?q=chatbot" title=" chatbot"> chatbot</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder-decoder" title=" encoder-decoder"> encoder-decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention" title=" neural generative attention"> neural generative attention</a>, <a href="https://publications.waset.org/abstracts/search?q=score%20function" title=" score function"> score function</a>, <a href="https://publications.waset.org/abstracts/search?q=sequence-to-sequence" title=" sequence-to-sequence"> sequence-to-sequence</a> </p> <a href="https://publications.waset.org/abstracts/176622/evaluating-generative-neural-attention-weights-based-chatbot-on-customer-support-twitter-dataset" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176622.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">78</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> Analysis and Identification of Trends in Electric Vehicle Crash Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cody%20Stolle">Cody Stolle</a>, <a href="https://publications.waset.org/abstracts/search?q=Mojdeh%20Asadollahipajouh"> Mojdeh Asadollahipajouh</a>, <a href="https://publications.waset.org/abstracts/search?q=Khaleb%20Pafford"> Khaleb Pafford</a>, <a href="https://publications.waset.org/abstracts/search?q=Jada%20Iwuoha"> Jada Iwuoha</a>, <a href="https://publications.waset.org/abstracts/search?q=Samantha%20White"> Samantha White</a>, <a href="https://publications.waset.org/abstracts/search?q=Becky%20Mueller"> Becky Mueller</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Battery-electric vehicles (BEVs) are growing in 
sales and popularity in the United States as an alternative to traditional internal combustion engine vehicles (ICEVs). BEVs are generally heavier than corresponding models of ICEVs, have large battery packs located beneath the vehicle floorpan on a “skateboard” chassis, and have front and rear crush space available in the trunk and “frunk” or front trunk. The geometrical and frame differences between the vehicles may lead to incompatibilities with gasoline vehicles during vehicle-to-vehicle crashes as well as run-off-road crashes with roadside barriers, which were designed to handle lighter ICEVs with higher centers-of-mass and with dedicated structural chassis. Crash data were collected from 10 states spanning a five-year period between 2017 and 2021. Vehicle Identification Number (VIN) codes were processed with the National Highway Traffic Safety Administration (NHTSA) VIN decoder to distinguish BEV models from ICEV models. Crashes were filtered to isolate only vehicles produced between 2010 and 2021, and the crash circumstances (weather, time of day, maximum injury) were compared between BEVs and ICEVs. In Washington, 436,613 crashes satisfying the selection criteria were identified, and 3,371 of these crashes (0.77%) involved a BEV. The proportion of crashes that noted a fire was comparable between BEVs and ICEVs of similar model years (0.3% and 0.33%, respectively), and no differences were discernible for the time of day, weather conditions, road geometry, or other prevailing factors (e.g., run-off-road). However, crashes involving BEVs rose rapidly; 31% of all BEV crashes occurred in just 2021. Results indicate that BEVs are performing comparably to ICEVs, and events surrounding BEV crashes are statistically indistinguishable from ICEV crashes. 
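<p class="card-text">As a rough illustration of the filtering and comparison step described above, the snippet below separates BEV from ICEV records and compares fire-involvement rates with pandas. The column names ("model_year", "fuel_type", "fire") and the toy rows are hypothetical; the actual state crash files and the NHTSA VIN decoder output use different schemas.</p> <pre><code># Illustrative sketch: filter crash records by model year and compare fire
# rates between BEVs and ICEVs. Column names and data are hypothetical.
import pandas as pd

crashes = pd.DataFrame({
    "model_year": [2012, 2019, 2021, 2015, 2020],
    "fuel_type":  ["ICE", "BEV", "BEV", "ICE", "ICE"],
    "fire":       [0, 0, 1, 1, 0],
})

recent = crashes[crashes["model_year"].between(2010, 2021)]   # model-year filter
rates = recent.groupby("fuel_type")["fire"].mean()            # fire rate by vehicle type
print(rates)
</code></pre>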
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=battery-electric%20vehicles" title="battery-electric vehicles">battery-electric vehicles</a>, <a href="https://publications.waset.org/abstracts/search?q=transportation%20safety" title=" transportation safety"> transportation safety</a>, <a href="https://publications.waset.org/abstracts/search?q=infrastructure%20crashworthiness" title=" infrastructure crashworthiness"> infrastructure crashworthiness</a>, <a href="https://publications.waset.org/abstracts/search?q=run-off-road%20crashes" title=" run-off-road crashes"> run-off-road crashes</a>, <a href="https://publications.waset.org/abstracts/search?q=ev%20crash%20data%20analysis" title=" ev crash data analysis"> ev crash data analysis</a> </p> <a href="https://publications.waset.org/abstracts/167829/analysis-and-identification-of-trends-in-electric-vehicle-crash-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167829.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> Performance of High Efficiency Video Codec over Wireless Channels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Ayyub%20Khan">Mohd Ayyub Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Nadeem%20Akhtar"> Nadeem Akhtar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, Interactive Video Games. However, the raw videos posses very high bandwidth which makes the compression a must before its transmission over the wireless channels. The High Efficiency Video Codec (HEVC) (also called H.265) is latest state-of-the-art video coding standard developed by the Joint effort of ITU-T and ISO/IEC teams. HEVC is targeted for high resolution videos such as 4K or 8K resolutions that can fulfil the recent demands for video services. The compression ratio achieved by the HEVC is twice as compared to its predecessor H.264/AVC for same quality level. The compression efficiency is generally increased by removing more correlation between the frames/pixels using complex techniques such as extensive intra and inter prediction techniques. As more correlation is removed, the chances of interdependency among coded bits increases. Thus, bit errors may have large effect on the reconstructed video. Sometimes even single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of HEVC bitstream over additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes are also explored over the noisy channel. The video will be encoded using HEVC, and the coded bitstream is channel coded to provide some redundancies. The channel coded bitstream is then modulated using QAM and transmitted over AWGN channel. 
At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream. The bitstream is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing an optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AWGN" title="AWGN">AWGN</a>, <a href="https://publications.waset.org/abstracts/search?q=forward%20error%20correction" title=" forward error correction"> forward error correction</a>, <a href="https://publications.waset.org/abstracts/search?q=HEVC" title=" HEVC"> HEVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20coding" title=" video coding"> video coding</a>, <a href="https://publications.waset.org/abstracts/search?q=QAM" title=" QAM"> QAM</a> </p> <a href="https://publications.waset.org/abstracts/92062/performance-of-high-efficiency-video-codec-over-wireless-channels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92062.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=decoder&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=decoder&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>