<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: temporal neural sequences</title> <meta name="description" content="Search results for: temporal neural sequences"> <meta name="keywords" content="temporal neural sequences"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="temporal neural sequences" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="temporal neural sequences"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3364</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: temporal neural sequences</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3364</span> Dual-Network Memory Model for Temporal Sequences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Motonobu%20Hattori">Motonobu Hattori</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In neural networks, when new patterns are learned by a network, they radically interfere with previously stored patterns. This drawback is called catastrophic forgetting. We have already proposed a biologically inspired dual-network memory model which can greatly reduce this forgetting for static patterns. In this model, information is first stored in the hippocampal network, and thereafter, it is transferred to the neocortical network using pseudo patterns.
Because temporal sequence learning is more important than static pattern learning in the real world, in this study we improve our conventional dual-network memory model so that it can deal with temporal sequences without catastrophic forgetting. The computer simulation results show the effectiveness of the proposed dual-network memory model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=catastrophic%20forgetting" title="catastrophic forgetting">catastrophic forgetting</a>, <a href="https://publications.waset.org/abstracts/search?q=dual-network" title=" dual-network"> dual-network</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20sequences" title=" temporal sequences"> temporal sequences</a>, <a href="https://publications.waset.org/abstracts/search?q=hippocampal" title=" hippocampal "> hippocampal </a> </p> <a href="https://publications.waset.org/abstracts/2908/dual-network-memory-model-for-temporal-sequences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">270</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3363</span> Perceptual Organization within Temporal Displacement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michele%20Sinico">Michele Sinico</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The psychological present has an actual extension.
When a sequence of instantaneous stimuli falls in this short interval of time, observers perceive a compresence of events in succession, and the temporal order depends on the qualitative relationships between the perceptual properties of the events. Two experiments were carried out to study the influence of perceptual grouping, with and without temporal displacement, on the duration of auditory sequences. The psychophysical method of adjustment was adopted. The first experiment investigated the effect of temporal displacement of a white noise on sequence duration. The second experiment investigated the effect of temporal displacement, along the pitch dimension, on the temporal shortening of the sequence. The results suggest that the temporal order of sounds, in the case of temporal displacement, is organized along the pitch dimension. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=time%20perception" title="time perception">time perception</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20present" title=" perceptual present"> perceptual present</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20displacement" title=" temporal displacement"> temporal displacement</a>, <a href="https://publications.waset.org/abstracts/search?q=Gestalt%20laws%20of%20perceptual%20organization" title=" Gestalt laws of perceptual organization"> Gestalt laws of perceptual organization</a> </p> <a href="https://publications.waset.org/abstracts/76211/perceptual-organization-within-temporal-displacement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/76211.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">251</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">3362</span> Taxonomic Classification for Living Organisms Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saed%20Khawaldeh">Saed Khawaldeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Elsharnouby"> Mohamed Elsharnouby</a>, <a href="https://publications.waset.org/abstracts/search?q=Alaa%20%20Eddin%20Alchalabi"> Alaa Eddin Alchalabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Usama%20Pervaiz"> Usama Pervaiz</a>, <a href="https://publications.waset.org/abstracts/search?q=Tajwar%20Aleef"> Tajwar Aleef</a>, <a href="https://publications.waset.org/abstracts/search?q=Vu%20Hoang%20Minh"> Vu Hoang Minh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Taxonomic classification has a wide range of applications, such as finding out more about the evolutionary history of organisms, which can be done by making a comparison between species living now and species that lived in the past. This comparison can be made using different kinds of extracted species’ data, which include DNA sequences. Compared to the estimated number of organisms that nature harbours, humanity still lacks a thorough comprehension of which specific species they all belong to, in spite of the significant development of science and scientific knowledge over many years. One of the methods that can be applied to extract information from the study of organisms in this regard is to use the DNA sequence of a living organism as a marker, thus making it possible to classify it into a taxonomy. The classification of living organisms can be done with many machine learning techniques, including Neural Networks (NNs). In this study, DNA sequences classification is performed using Convolutional Neural Networks (CNNs), a special type of NN.
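As an illustrative sketch (not the authors' implementation), the key data-preparation step for CNN-based DNA classification is one-hot encoding, which turns a DNA string into the 4-channel numeric matrix a 1D convolutional layer consumes; a single hand-set filter then shows how a convolution scores motif matches along the sequence:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Map a DNA string to a (len(seq), 4) one-hot matrix."""
    idx = {b: i for i, b in enumerate(BASES)}
    m = np.zeros((len(seq), 4))
    for pos, base in enumerate(seq):
        m[pos, idx[base]] = 1.0
    return m

def conv1d(x, kernel):
    """Valid-mode 1D convolution over the sequence axis: each output
    position scores how well a length-k motif filter matches there."""
    k = kernel.shape[0]
    return np.array([np.sum(x[i:i + k] * kernel)
                     for i in range(x.shape[0] - k + 1)])

x = one_hot("ACGTAC")
motif = one_hot("GTA")   # a filter that fires where the motif "GTA" occurs
scores = conv1d(x, motif)
print(scores)            # peak score of 3.0 at the position of "GTA"
```

In a trained CNN the filters are learned rather than hand-set, but the mechanics of sliding a small pattern over the encoded sequence are the same.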
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20networks" title="deep networks">deep networks</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=taxonomic%20classification" title=" taxonomic classification"> taxonomic classification</a>, <a href="https://publications.waset.org/abstracts/search?q=DNA%20sequences%20classification" title=" DNA sequences classification "> DNA sequences classification </a> </p> <a href="https://publications.waset.org/abstracts/65170/taxonomic-classification-for-living-organisms-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/65170.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">442</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3361</span> Constructing Orthogonal De Bruijn and Kautz Sequences and Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yaw-Ling%20Lin">Yaw-Ling Lin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A de Bruijn graph of order k is a graph whose vertices represent all length-k sequences, with edges joining pairs of vertices whose sequences have maximum possible overlap (length k−1). Every Hamiltonian cycle of this graph defines a distinct, minimum-length de Bruijn sequence containing all k-mers exactly once.
A Kautz sequence is the sequence of minimal length that produces all possible length-k sequences, with the restriction that every two consecutive alphabets in the sequence must be different. A collection of de Bruijn/Kautz sequences is orthogonal if any two sequences maximally differ in sequence composition; that is, the maximum length of their common substring is k. In this paper, we discuss how such a collection of (maximal) orthogonal de Bruijn/Kautz sequences can be constructed, and we use the algorithm to build a web application service for synthesized DNA and other related biomolecular sequences. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biomolecular%20sequence%20synthesis" title="biomolecular sequence synthesis">biomolecular sequence synthesis</a>, <a href="https://publications.waset.org/abstracts/search?q=de%20Bruijn%20sequences" title=" de Bruijn sequences"> de Bruijn sequences</a>, <a href="https://publications.waset.org/abstracts/search?q=Eulerian%20cycle" title=" Eulerian cycle"> Eulerian cycle</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamiltonian%20cycle" title=" Hamiltonian cycle"> Hamiltonian cycle</a>, <a href="https://publications.waset.org/abstracts/search?q=Kautz%20sequences" title=" Kautz sequences"> Kautz sequences</a>, <a href="https://publications.waset.org/abstracts/search?q=orthogonal%20sequences" title=" orthogonal sequences"> orthogonal sequences</a> </p> <a href="https://publications.waset.org/abstracts/121912/constructing-orthogonal-de-bruijn-and-kautz-sequences-and-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/121912.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">167</span> </span> </div> </div> <div class="card paper-listing mb-3 
mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3360</span> Game Structure and Spatio-Temporal Action Detection in Soccer Using Graphs and 3D Convolutional Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J%C3%A9r%C3%A9mie%20Ochin">Jérémie Ochin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Soccer analytics are built on two data sources: the frame-by-frame position of each player on the terrain and the sequences of events, such as ball drive, pass, cross, shot, throw-in... With more than 2000 ball-events per soccer game, their precise and exhaustive annotation, based on a monocular video stream such as a TV broadcast, remains a tedious and costly manual task. State-of-the-art methods for spatio-temporal action detection from a monocular video stream, often based on 3D convolutional neural networks, are close to reaching levels of performance in mean Average Precision (mAP) compatible with the automation of such a task. Nevertheless, to meet the expectation of exhaustiveness in the context of data analytics, such methods must be applied in a regime of high recall – low precision, using low confidence score thresholds. This setting unavoidably leads to the detection of false positives that are the product of the well-documented overconfidence behaviour of neural networks and, in this case, of their limited access to contextual information and understanding of the game: their predictions are highly unstructured. Based on the assumption that professional soccer players’ behaviour, pose, positions and velocity are highly interrelated and locally driven by the player performing a ball-action, it is hypothesized that adding information regarding surrounding players’ appearance, positions and velocity to the prediction methods can improve their metrics.
Several methods are compared to build a proper representation of the game surrounding a player, from handcrafted features of the local graph, based on domain knowledge, to the use of Graph Neural Networks trained in an end-to-end fashion with existing state-of-the-art 3D convolutional neural networks. It is shown that the inclusion of information regarding surrounding players helps reach higher metrics. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fine-grained%20action%20recognition" title="fine-grained action recognition">fine-grained action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20action%20recognition" title=" human action recognition"> human action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20neural%20networks" title=" graph neural networks"> graph neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=spatio-temporal%20action%20recognition" title=" spatio-temporal action recognition"> spatio-temporal action recognition</a> </p> <a href="https://publications.waset.org/abstracts/192167/game-structure-and-spatio-temporal-action-detection-in-soccer-using-graphs-and-3d-convolutional-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/192167.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">24</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3359</span> Chinese Sentence Level Lip Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> 
<a href="https://publications.waset.org/abstracts/search?q=Peng%20Wang">Peng Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Tigang%20Jiang"> Tigang Jiang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Computer-based lip reading methods cannot be universal across different languages. At present, research on Chinese lip reading, whether on datasets or on recognition algorithms, is far from mature. In this paper, we study a machine learning approach to Chinese lip reading and propose a Chinese sentence-level lip-reading network (CNLipNet) model, which consists of a spatio-temporal convolutional neural network (CNN), a recurrent neural network (RNN), and a Connectionist Temporal Classification (CTC) loss function. This model can map a variable-length sequence of video frames to a Chinese Pinyin sequence and is trained end-to-end. Moreover, we create CNLRS, a Chinese lipreading dataset, which contains 5948 samples and can be shared through GitHub. The evaluation of CNLipNet on this dataset yielded a 41% word correct rate and a 70.6% character correct rate. This evaluation result is far superior to that of professional human lip readers, indicating that CNLipNet performs well in lipreading. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lipreading" title="lipreading">lipreading</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=spatio-temporal" title=" spatio-temporal"> spatio-temporal</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=recurrent%20neural%20network" title=" recurrent neural network"> recurrent neural network</a> </p> <a href="https://publications.waset.org/abstracts/127254/chinese-sentence-level-lip-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127254.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3358</span> Latency-Based Motion Detection in Spiking Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Saleh%20Vahdatpour">Mohammad Saleh Vahdatpour</a>, <a href="https://publications.waset.org/abstracts/search?q=Yanqing%20Zhang"> Yanqing Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Understanding the neural mechanisms underlying motion detection in the human visual system has long been a fascinating challenge in neuroscience and artificial intelligence. 
This paper presents a spiking neural network model inspired by the processing of motion information in the primate visual system, particularly focusing on the Middle Temporal (MT) area. In our study, we propose a multi-layer spiking neural network model to perform motion detection tasks, leveraging the idea that synaptic delays in neuronal communication are pivotal in motion perception. Synaptic delay, determined by factors like axon length and myelin insulation, affects the temporal order of input spikes, thereby encoding motion direction and speed. Overall, our spiking neural network model demonstrates the feasibility of capturing motion detection principles observed in the primate visual system. The combination of synaptic delays, learning mechanisms, and shared weights and delays in SMD provides a promising framework for motion perception in artificial systems, with potential applications in computer vision and robotics. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title="neural network">neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20detection" title=" motion detection"> motion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=signature%20detection" title=" signature detection"> signature detection</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a> </p> <a href="https://publications.waset.org/abstracts/174855/latency-based-motion-detection-in-spiking-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174855.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> 
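The synaptic-delay mechanism in the motion-detection abstract above can be sketched as a toy delay-line coincidence detector (an illustrative simplification, not the paper's spiking model): a stimulus moving left-to-right reaches receptor A before receptor B, so a detector whose delayed path from A compensates that lag receives coincident spikes, while the opposite-direction detector does not.

```python
# Toy delay-line coincidence detection (times in milliseconds).
def arrival_times(spike_a, spike_b, delay_a, delay_b):
    """Spike arrival times at the detector after each synaptic delay."""
    return spike_a + delay_a, spike_b + delay_b

def coincidence(t1, t2, window=0.5):
    """The detector fires only if both inputs arrive within the window."""
    return abs(t1 - t2) <= window

spike_a, spike_b = 0.0, 3.0   # rightward motion: A fires 3 ms before B
rightward = coincidence(*arrival_times(spike_a, spike_b, delay_a=3.0, delay_b=0.0))
leftward = coincidence(*arrival_times(spike_a, spike_b, delay_a=0.0, delay_b=3.0))
print(rightward, leftward)    # True False
```

Varying the delay selects the preferred speed as well as the direction, which is the sense in which synaptic delays encode motion in such models.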
<h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3357</span> Leveraging the Power of Dual Spatial-Temporal Data Scheme for Traffic Prediction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Zhou">Yang Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Heli%20Sun"> Heli Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianbin%20Huang"> Jianbin Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jizhong%20Zhao"> Jizhong Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaojie%20Qiao"> Shaojie Qiao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traffic prediction is a fundamental problem in urban environments, facilitating the smart management of various businesses, such as taxi dispatching, bike relocation, and stampede alerts. Most earlier methods rely on identifying the intrinsic spatial-temporal correlations to forecast. However, the complex nature of this problem entails a more sophisticated solution that can simultaneously capture the mutual influence of both adjacent and far-flung areas, with time-dimension information also incorporated seamlessly. To tackle this difficulty, we propose a new multi-phase architecture, DSTDS (Dual Spatial-Temporal Data Scheme for traffic prediction), that aims to reveal the underlying relationship that determines future traffic trends. First, a graph-based neural network with an attention mechanism is devised to obtain the static features of the road network. Then, a multi-granularity recurrent neural network is built in conjunction with the knowledge from a grid-based model. Subsequently, the preceding output is fed into a spatial-temporal super-resolution module.
With this 3-phase structure, we carry out extensive experiments on several real-world datasets to demonstrate the effectiveness of our approach, which surpasses several state-of-the-art methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20prediction" title="traffic prediction">traffic prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial-temporal" title=" spatial-temporal"> spatial-temporal</a>, <a href="https://publications.waset.org/abstracts/search?q=recurrent%20neural%20network" title=" recurrent neural network"> recurrent neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=dual%20data%20scheme" title=" dual data scheme"> dual data scheme</a> </p> <a href="https://publications.waset.org/abstracts/150299/leveraging-the-power-of-dual-spatial-temporal-data-scheme-for-traffic-prediction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150299.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3356</span> Temporal Characteristics of Human Perception to Significant Variation of Block Structures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kuo-Cheng%20Liu">Kuo-Cheng Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the latest research efforts, the structures of the image in the spatial domain have been successfully analyzed and used to deduce the visual masking for accurately estimating the visibility thresholds of the image.
If the structural properties of the video sequence in the temporal domain are taken into account to estimate the temporal masking, an improvement in assessing the spatio-temporal visibility thresholds is reasonably expected. In this paper, the temporal characteristics of human perception of changes in block structures on the time axis are analyzed. The temporal characteristics of human perception are represented in terms of the significant variation in block structures for the analysis of the human visual system (HVS). Herein, the block structure in each frame is computed by combining the pattern masking and the contrast masking simultaneously. The contrast masking always overestimates the visibility thresholds of edge regions and underestimates those of texture regions, while the pattern masking is weak on a uniform background and strong on a complex background with spatial patterns. Considering the significant variation of block structures between successive frames, we extend the block structures of images in the spatial domain to those of video sequences in the temporal domain to analyze the relation between the inter-frame variation of structures and the temporal masking. Meanwhile, a subjective viewing test and a fair rating process are designed to evaluate the consistency of the temporal characteristics with the HVS under a specified viewing condition. 
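An illustrative sketch (not the paper's masking formulas) of the basic quantity such a temporal analysis starts from: splitting two successive frames into blocks and measuring the per-block variation between them.

```python
import numpy as np

def block_variation(prev, curr, bs=4):
    """Mean absolute inter-frame difference per bs x bs block.
    Assumes frame dimensions are multiples of the block size."""
    h, w = prev.shape
    out = np.zeros((h // bs, w // bs))
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            d = curr[i:i + bs, j:j + bs] - prev[i:i + bs, j:j + bs]
            out[i // bs, j // bs] = np.abs(d).mean()
    return out

prev = np.zeros((8, 8))
curr = np.zeros((8, 8))
curr[:4, :4] = 10.0   # only the top-left block changes between frames
print(block_variation(prev, curr))
```

In the paper's setting, this raw variation would be weighted by the combined pattern- and contrast-masking model to modulate the per-block visibility thresholds.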
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=temporal%20characteristic" title="temporal characteristic">temporal characteristic</a>, <a href="https://publications.waset.org/abstracts/search?q=block%20structure" title=" block structure"> block structure</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20masking" title=" pattern masking"> pattern masking</a>, <a href="https://publications.waset.org/abstracts/search?q=contrast%20masking" title=" contrast masking"> contrast masking</a> </p> <a href="https://publications.waset.org/abstracts/35248/temporal-characteristics-of-human-perception-to-significant-variation-of-block-structures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35248.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3355</span> Hidden Markov Model for the Simulation Study of Neural States and Intentionality</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20B.%20Mishra">R. B. Mishra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Hidden Markov Model (HMM) has been used in the prediction and determination of states that generate different neural activations as well as mental working conditions. This paper addresses two applications of HMM: one is to determine the optimal sequence of states for two neural states, Active (AC) and Inactive (IA), with three emissions (observations) corresponding to the No Working (NW), Waiting (WT) and Working (W) conditions of human beings. Another is for the determination of the optimal sequence of intentionality, i.e.
Believe (B), Desire (D), and Intention (I) as the states and three observational sequences: NW, WT and W. The computational results are encouraging and useful. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hiden%20markov%20model" title="hidden Markov model">hidden Markov model</a>, <a href="https://publications.waset.org/abstracts/search?q=believe%20desire%20intention" title=" believe desire intention"> believe desire intention</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20activation" title=" neural activation"> neural activation</a>, <a href="https://publications.waset.org/abstracts/search?q=simulation" title=" simulation"> simulation</a> </p> <a href="https://publications.waset.org/abstracts/31030/hidden-markov-model-for-the-simulation-study-of-neural-states-and-intentionality" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31030.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">376</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3354</span> Human Posture Estimation Based on Multiple Viewpoints</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiahe%20Liu">Jiahe Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=HongyangYu"> Hongyang Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Feng%20Qian"> Feng Qian</a>, <a href="https://publications.waset.org/abstracts/search?q=Miao%20Luo"> Miao Luo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aimed to address the problem of improving the confidence of key points by fusing multi-view information, thereby estimating human posture more
accurately. We first obtained multi-view image information and then used the MvP algorithm to fuse this multi-view information to obtain a set of high-confidence human key points. We used these as the input for the Spatio-Temporal Graph Convolutional Network (ST-GCN). ST-GCN is a deep learning model used for processing spatio-temporal data, which can effectively capture spatio-temporal relationships in video sequences. By using the MvP algorithm to fuse multi-view information and inputting it into the spatio-temporal graph convolution model, this study provides an effective method to improve the accuracy of human posture estimation and provides strong support for further research and application in related fields. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-view" title="multi-view">multi-view</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title=" pose estimation"> pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=ST-GCN" title=" ST-GCN"> ST-GCN</a>, <a href="https://publications.waset.org/abstracts/search?q=joint%20fusion" title=" joint fusion"> joint fusion</a> </p> <a href="https://publications.waset.org/abstracts/173781/human-posture-estimation-based-on-multiple-viewpoints" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173781.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3353</span> Neural Networks Underlying the Generation of Neural Sequences in the HVC</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zeina%20Bou%20Diab">Zeina
Bou Diab</a>, <a href="https://publications.waset.org/abstracts/search?q=Arij%20Daou"> Arij Daou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The neural mechanisms of sequential behaviors are intensively studied, with songbirds serving as a model system for learned vocal production. We study the premotor nucleus HVC, which sits at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each with its own cellular, electrophysiological, and functional properties. During singing, a large subset of motor cortex analog-projecting HVCRA neurons emit a single 6-10 ms burst of spikes at the same time during each rendition of song, a large subset of basal ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time locked to vocalizations, while HVCINT neurons fire tonically at a high average frequency throughout song, with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and specific glutamatergic and GABAergic pharmacology) via different architecture patterning scenarios, with the aim of replicating the in vivo firing patterning behaviors. Through these networks, we are able to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might be contributing to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing.
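The conductance-based modeling approach can be illustrated with a minimal single-compartment sketch integrated by forward Euler. Note that the rate functions and maximal conductances below are generic textbook Hodgkin-Huxley values, not the HVC-specific ion-channel parameters fitted in the study:

```python
import math

# Classic Hodgkin-Huxley gating kinetics (generic squid-axon values,
# standing in for the HVC-specific conductances identified in vitro).
def alpha_n(v): return 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * math.exp(-(v + 65) / 80)
def alpha_m(v): return 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * math.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * math.exp(-(v + 65) / 20)
def beta_h(v):  return 1.0 / (1 + math.exp(-(v + 35) / 10))

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of one compartment; returns the spike count."""
    g_na, g_k, g_l = 120.0, 36.0, 0.3    # maximal conductances, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.4  # reversal potentials, mV
    v, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting state
    spikes, above = 0, False
    t = 0.0
    while t < t_max:
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)
        v += dt * (i_ext - i_na - i_k - i_l)  # C_m = 1 uF/cm^2
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        if v > 0 and not above:  # upward threshold crossing = one spike
            spikes, above = spikes + 1, True
        if v < -30:
            above = False
        t += dt
    return spikes
```

Replacing these generic channels with the HVC-specific currents (e.g., T-type Ca2+ and H currents) and coupling several such compartments through synaptic conductances yields the kind of network the abstract describes.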
Examples of such possible mechanisms include: 1) post-inhibitory rebound in HVCX and their population patterns during singing, 2) different subclasses of HVCINT interacting via inhibitory-inhibitory loops, 3) mono-synaptic HVCX to HVCRA excitatory connectivity, and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, and others. Replication is only a preliminary step that must be followed by model prediction and testing. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computational%20modeling" title="computational modeling">computational modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences" title=" temporal neural sequences"> temporal neural sequences</a>, <a href="https://publications.waset.org/abstracts/search?q=ionic%20currents" title=" ionic currents"> ionic currents</a>, <a href="https://publications.waset.org/abstracts/search?q=songbird" title=" songbird"> songbird</a> </p> <a href="https://publications.waset.org/abstracts/176727/neural-networks-underlying-the-generation-of-neural-sequences-in-the-hvc" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176727.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">71</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3352</span> Acquisition of Anticipatory Coarticulation in Italian-Speaking Children: An Acoustic Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Patrizia%20Bonaventura">Patrizia Bonaventura</a> 
</p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this study is to analyze the influence of prosody on the acquisition of temporal aspects of V-V anticipatory lingual coarticulation in productions by Italian-speaking children. Two twin 7-year-old male children, native Italian speakers, interacted with the same adult, repeating nonsense disyllables containing VtV sequences where V1 = {i, a} and V2 = {a, e, i, o, u}, with different stress patterns (e.g., ˈpita vs. piˈta). The durations of the VC F2 transitions and the CV/VC F2 transition duration ratios in different V2 contexts and stress conditions were measured by spectrographic analysis and compared between pronunciations by each child vs. the adult, to test whether the child was able to imitate the duration of the transitions as produced by the adult in different stress conditions. The results highlighted a significant difference in the durations of VC transitions between children and the adult: longer VC transition durations, indicating a greater amount of coarticulation, were found for one child in every context, and for the other, only in stressed [it] sequences. The data support the hypothesis of the presence of different temporal patterns of anticipatory coarticulation in adults and children, and of a greater amount of coarticulation in children, with different strategies of implementation across different prosodic conditions.
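The transition-duration measures can be sketched as a small computation over hand-labeled segment boundaries; the boundary times below (in ms) and the function name are hypothetical, purely to illustrate the arithmetic:

```python
# Toy illustration of the transition-duration measures: given hand-labeled
# boundaries from a spectrogram (times in ms, values hypothetical), compute
# the VC and CV F2 transition durations and their CV/VC ratio.
def transition_durations(v1_end, trans_end_vc, c_end, trans_end_cv):
    vc = trans_end_vc - v1_end   # V1-to-C F2 transition duration
    cv = trans_end_cv - c_end    # C-to-V2 F2 transition duration
    return vc, cv, cv / vc

vc, cv, ratio = transition_durations(120.0, 150.0, 210.0, 255.0)
```

A ratio above 1 means the CV transition is longer than the VC transition; comparing child vs. adult ratios per stress condition is the comparison the study performs.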
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20acquisition" title="speech acquisition">speech acquisition</a>, <a href="https://publications.waset.org/abstracts/search?q=coarticulation" title=" coarticulation"> coarticulation</a>, <a href="https://publications.waset.org/abstracts/search?q=Italian%20language" title=" Italian language"> Italian language</a>, <a href="https://publications.waset.org/abstracts/search?q=prosody" title=" prosody"> prosody</a> </p> <a href="https://publications.waset.org/abstracts/163132/acquisition-of-anticipatory-coarticulation-in-italian-speaking-children-an-acoustic-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163132.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3351</span> A Comprehensive Analysis of the Phylogenetic Signal in Ramp Sequences in 211 Vertebrates</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lauren%20M.%20McKinnon">Lauren M. McKinnon</a>, <a href="https://publications.waset.org/abstracts/search?q=Justin%20B.%20Miller"> Justin B. Miller</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20F.%20Whiting"> Michael F. Whiting</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20S.%20K.%20Kauwe"> John S. K. Kauwe</a>, <a href="https://publications.waset.org/abstracts/search?q=Perry%20G.%20Ridge"> Perry G. 
Ridge</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Ramp sequences increase translational speed and accuracy when rare, slowly-translated codons are found at the beginnings of genes. Here, the results of the first analysis of ramp sequences in a phylogenetic construct are presented. Methods: Ramp sequences were compared from 211 vertebrates (110 Mammalian and 101 non-mammalian). The presence and absence of ramp sequences were analyzed as a binary character in a parsimony and maximum likelihood framework. Additionally, ramp sequences were mapped to the Open Tree of Life taxonomy to determine the number of parallelisms and reversals that occurred, and these results were compared to what would be expected due to random chance. Lastly, aligned nucleotides in ramp sequences were compared to the rest of the sequence in order to examine possible differences in phylogenetic signal between these regions of the gene. Results: Parsimony and maximum likelihood analyses of the presence/absence of ramp sequences recovered phylogenies that are highly congruent with established phylogenies. Additionally, the retention index of ramp sequences is significantly higher than would be expected due to random chance (p-value = 0). A chi-square analysis of completely orthologous ramp sequences resulted in a p-value of approximately zero as compared to random chance. Discussion: Ramp sequences recover comparable phylogenies as other phylogenomic methods. Although not all ramp sequences appear to have a phylogenetic signal, more ramp sequences track speciation than expected by random chance. Therefore, ramp sequences may be used in conjunction with other phylogenomic approaches. 
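The defining property of a ramp sequence, rare (slowly translated) codons concentrated at the start of a gene, can be sketched as a toy detector; the window size and margin are hypothetical, not the study's actual criteria:

```python
def has_ramp(codon_freqs, window=5, margin=0.1):
    """Toy ramp detector: True when the mean usage frequency of the first
    `window` codons sits `margin` below the gene-wide mean.
    (Thresholds are illustrative, not the study's detection pipeline.)"""
    head = sum(codon_freqs[:window]) / window
    overall = sum(codon_freqs) / len(codon_freqs)
    return head < overall - margin

# Rare (slow) codons at the start, common codons afterwards -> ramp present.
ramped = [0.1, 0.15, 0.1, 0.2, 0.15] + [0.8] * 20
flat = [0.8] * 25
```

The resulting presence/absence call is exactly the kind of binary character the parsimony and maximum likelihood analyses above operate on.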
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=codon%20usage%20bias" title="codon usage bias">codon usage bias</a>, <a href="https://publications.waset.org/abstracts/search?q=phylogenetics" title=" phylogenetics"> phylogenetics</a>, <a href="https://publications.waset.org/abstracts/search?q=phylogenomics" title=" phylogenomics"> phylogenomics</a>, <a href="https://publications.waset.org/abstracts/search?q=ramp%20sequence" title=" ramp sequence"> ramp sequence</a> </p> <a href="https://publications.waset.org/abstracts/124024/a-comprehensive-analysis-of-the-phylogenetic-signal-in-ramp-sequences-in-211-vertebrates" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/124024.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3350</span> Language Processing of Seniors with Alzheimer’s Disease: From the Perspective of Temporal Parameters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lai%20Yi-Hsiu">Lai Yi-Hsiu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present paper aims to examine the language processing of Chinese-speaking seniors with Alzheimer’s disease (AD) from the perspective of temporal cues. Twenty healthy adults, 17 healthy seniors, and 13 seniors with AD in Taiwan participated in this study to tell stories based on two sets of pictures. Nine temporal cues were fetched and analyzed. Oral productions in Mandarin Chinese were compared and discussed to examine to what extent and in what way these three groups of participants performed with significant differences. 
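Two of the pause-based temporal cues can be sketched from a hand-annotated event list; the labels and durations below are hypothetical, purely illustrative:

```python
# Toy computation of two temporal cues from an annotated speech sample:
# mean pause duration and filled-pause rate. Event labels and durations
# (in seconds) are hypothetical.
events = [("word", 0.4), ("empty_pause", 0.6), ("word", 0.5),
          ("filled_pause", 0.3), ("empty_pause", 0.9), ("word", 0.2)]

pauses = [d for kind, d in events if kind.endswith("pause")]
mean_pause = sum(pauses) / len(pauses)
filled_rate = sum(1 for kind, _ in events if kind == "filled_pause") / len(events)
```

Cues of this kind, computed per speaker group, are what the age-effect and dementia-effect comparisons below are run on.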
Results indicated that the age effects were significant in filled pauses. The dementia effects were significant in mean duration of pauses, empty pauses, filled pauses, lexical pauses, and normalized mean duration of filled pauses and lexical pauses. The findings reported in the current paper help characterize the nature of language processing in seniors with or without AD, and contribute to understanding the interaction between AD neural mechanisms and temporal parameters of speech. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=language%20processing" title="language processing">language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=Alzheimer%E2%80%99s%20disease" title=" Alzheimer’s disease"> Alzheimer’s disease</a>, <a href="https://publications.waset.org/abstracts/search?q=Mandarin%20Chinese" title=" Mandarin Chinese"> Mandarin Chinese</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20cues" title=" temporal cues"> temporal cues</a> </p> <a href="https://publications.waset.org/abstracts/62548/language-processing-of-seniors-with-alzheimers-disease-from-the-perspective-of-temporal-parameters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62548.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3349</span> Neural Network Mechanisms Underlying the Combination Sensitivity Property in the HVC of Songbirds</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zeina%20Merabi">Zeina Merabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Arij%20Dao"> Arij Dao</a> </p> <p
class="card-text"><strong>Abstract:</strong></p> The temporal order of information processing in the brain is an important code in many acoustic signals, including speech, music, and animal vocalizations. Despite its significance, surprisingly little is known about its underlying cellular mechanisms and network manifestations. In the songbird telencephalic nucleus HVC, a subset of neurons shows temporal combination sensitivity (TCS). These neurons show a high temporal specificity, responding differently to distinct patterns of spectral elements and their combinations. HVC neuron types include basal-ganglia-projecting HVCX, forebrain-projecting HVCRA, and interneurons (HVCINT), each exhibiting distinct cellular, electrophysiological and functional properties. In this work, we develop conductance-based neural network models connecting the different classes of HVC neurons via different wiring scenarios, aiming to explore possible neural mechanisms that orchestrate the combination sensitivity property exhibited by HVCX, as well as to replicate the in vivo firing patterns observed when TCS neurons are presented with various auditory stimuli. The ionic and synaptic currents included for each class of neurons are based on pharmacological studies, rendering our networks biologically plausible. We present for the first time several realistic scenarios in which the different types of HVC neurons can interact to produce this behavior. The different networks highlight neural mechanisms that could potentially help to explain some aspects of combination sensitivity, including 1) interplay between inhibitory interneurons’ activity and the post-inhibitory firing of the HVCX neurons enabled by T-type Ca2+ and H currents, 2) temporal summation at the TCS site of opposing synaptic inputs that are time- and frequency-dependent, and 3) reciprocal inhibitory and excitatory loops as a potent mechanism to encode information over many milliseconds.
The result is a plausible network model characterizing auditory processing in HVC. Our next step is to test the predictions of the model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=combination%20sensitivity" title="combination sensitivity">combination sensitivity</a>, <a href="https://publications.waset.org/abstracts/search?q=songbirds" title=" songbirds"> songbirds</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=spatiotemporal%20integration" title=" spatiotemporal integration"> spatiotemporal integration</a> </p> <a href="https://publications.waset.org/abstracts/176725/neural-network-mechanisms-underlying-the-combination-sensitivity-property-in-the-hvc-of-songbirds" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176725.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">65</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3348</span> A Unified Deep Framework for Joint 3d Pose Estimation and Action Recognition from a Single Color Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huy%20Hieu%20Pham">Huy Hieu Pham</a>, <a href="https://publications.waset.org/abstracts/search?q=Houssam%20Salmane"> Houssam Salmane</a>, <a href="https://publications.waset.org/abstracts/search?q=Louahdi%20Khoudour"> Louahdi Khoudour</a>, <a href="https://publications.waset.org/abstracts/search?q=Alain%20Crouzil"> Alain Crouzil</a>, <a href="https://publications.waset.org/abstracts/search?q=Pablo%20Zegers"> Pablo Zegers</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Sergio%20Velastin"> Sergio Velastin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a deep learning-based multitask framework for joint 3D human pose estimation and action recognition from color video sequences. Our approach proceeds along two stages. In the first, we run a real-time 2D pose detector to determine the precise pixel location of important key points of the body. A two-stream neural network is then designed and trained to map detected 2D keypoints into 3D poses. In the second, we deploy the Efficient Neural Architecture Search (ENAS) algorithm to find an optimal network architecture that is used for modeling the Spatio-temporal evolution of the estimated 3D poses via an image-based intermediate representation and performing action recognition. Experiments on Human3.6M, Microsoft Research Redmond (MSR) Action3D, and Stony Brook University (SBU) Kinect Interaction datasets verify the effectiveness of the proposed method on the targeted tasks. Moreover, we show that our method requires a low computational budget for training and inference. 
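The two-stage design, detect 2D keypoints, lift them to 3D, then recognize the action from the pose trajectory, can be caricatured in a few lines. The linear lifting weights and the motion threshold below are hand-picked stand-ins for the learned two-stream network and the ENAS-found recognition architecture, purely to show the data flow:

```python
# Toy sketch of the two-stage pipeline. Stage 1 (the 2D detector) is assumed
# to have produced (x, y) keypoints already; stage 2 lifts them to 3D and
# classifies the action. All weights/thresholds are illustrative, not learned.
def lift_to_3d(keypoints_2d):
    # Depth is "predicted" here by a fixed linear map over (x, y).
    return [(x, y, 0.5 * x - 0.25 * y) for x, y in keypoints_2d]

def classify(frames_3d, still_threshold=0.05):
    # Mean total joint displacement between consecutive frames.
    disp = 0.0
    for prev, cur in zip(frames_3d, frames_3d[1:]):
        disp += sum(abs(a - b) for p, c in zip(prev, cur) for a, b in zip(p, c))
    disp /= max(1, len(frames_3d) - 1)
    return "moving" if disp > still_threshold else "still"

frames = [lift_to_3d([(0.0, 0.0), (1.0, 1.0)]),
          lift_to_3d([(0.2, 0.0), (1.2, 1.0)])]
```

In the actual method, both the lifting map and the classifier are trained networks, and the 3D pose sequence is first rendered into an image-based intermediate representation.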
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20action%20recognition" title="human action recognition">human action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title=" pose estimation"> pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=D-CNN" title=" D-CNN"> D-CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/115449/a-unified-deep-framework-for-joint-3d-pose-estimation-and-action-recognition-from-a-single-color-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/115449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">146</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3347</span> Theory of Mind and Its Brain Distribution in Patients with Temporal Lobe Epilepsy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei-Han%20Wang">Wei-Han Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hsiang-Yu%20Yu"> Hsiang-Yu Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Mau-Sun%20Hua"> Mau-Sun Hua</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Theory of Mind (ToM) refers to the ability to infer another’s mental state. With appropriate ToM, one can behave well in social interactions. A growing body of evidence has demonstrated that patients with temporal lobe epilepsy (TLE) may have damaged ToM due to impact on regions of the underlying neural network of ToM. 
However, the question of whether there is cerebral laterality for ToM functions remains open. This study aimed to examine whether there is cerebral lateralization for ToM abilities in TLE patients. Sixty-seven adult TLE patients and 30 matched healthy controls (HC) were recruited. Patients were classified into right (RTLE), left (LTLE), and bilateral (BTLE) TLE groups on the basis of a consensus panel review of their seizure semiology, EEG findings, and brain imaging results. All participants completed an intellectual test and four tasks measuring basic and advanced ToM. The results showed that, on all ToM tasks; (1)each patient group performed worse than HC; (2)there were no significant differences between LTLE and RTLE groups; (3)the BTLE group performed the worst. It appears that the neural network responsible for ToM is distributed evenly between the cerebral hemispheres. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cerebral%20lateralization" title="cerebral lateralization">cerebral lateralization</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20cognition" title=" social cognition"> social cognition</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20lobe%20epilepsy" title=" temporal lobe epilepsy"> temporal lobe epilepsy</a>, <a href="https://publications.waset.org/abstracts/search?q=theory%20of%20mind" title=" theory of mind"> theory of mind</a> </p> <a href="https://publications.waset.org/abstracts/23843/theory-of-mind-and-its-brain-distribution-in-patients-with-temporal-lobe-epilepsy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23843.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">420</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">3346</span> Speech Emotion Recognition with Bi-GRU and Self-Attention based Feature Representation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bubai%20Maji">Bubai Maji</a>, <a href="https://publications.waset.org/abstracts/search?q=Monorama%20Swain"> Monorama Swain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech is considered an essential and most natural medium for the interaction between machines and humans. However, extracting effective features for speech emotion recognition (SER) remains challenging. Existing studies capture temporal information, but high-level temporal-feature learning has yet to be fully investigated. In this paper, we present an efficient novel method that uses a self-attention (SA) mechanism in combination with a Convolutional Neural Network (CNN) and a Bi-directional Gated Recurrent Unit (Bi-GRU) network to learn high-level temporal features. To further enhance the representation of these high-level temporal features, we integrate the Bi-GRU output with learnable weighted features via SA, improving performance. We evaluate our proposed method on our created SITB-OSED and IEMOCAP databases. We report that the experimental results of our proposed method achieve state-of-the-art performance on both databases.
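The self-attention step over the Bi-GRU outputs can be sketched in plain Python as scaled dot-product attention; this is a minimal, unbatched sketch in which queries, keys, and values are all the hidden vectors themselves, whereas the actual model adds learnable projections:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(h):
    """Scaled dot-product self-attention over a sequence of hidden vectors
    (e.g., Bi-GRU outputs). Each output is a convex combination of `h`."""
    d = len(h[0])
    out = []
    for q in h:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in h]
        w = softmax(scores)  # attention weights over all time steps
        out.append([sum(wj * h[j][i] for j, wj in enumerate(w)) for i in range(d)])
    return out

seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy 3-step hidden sequence
ctx = self_attention(seq)
```

Each context vector mixes information from the whole utterance, which is what lets the model emphasize the emotionally salient frames.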
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bi-GRU" title="Bi-GRU">Bi-GRU</a>, <a href="https://publications.waset.org/abstracts/search?q=1D-CNNs" title=" 1D-CNNs"> 1D-CNNs</a>, <a href="https://publications.waset.org/abstracts/search?q=self-attention" title=" self-attention"> self-attention</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20emotion%20recognition" title=" speech emotion recognition"> speech emotion recognition</a> </p> <a href="https://publications.waset.org/abstracts/148332/speech-emotion-recognition-with-bi-gru-and-self-attention-based-feature-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">113</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3345</span> Modeling and Tracking of Deformable Structures in Medical Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Said%20Ettaieb">Said Ettaieb</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamel%20Hamrouni"> Kamel Hamrouni</a>, <a href="https://publications.waset.org/abstracts/search?q=Su%20Ruan"> Su Ruan </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a new method based both on Active Shape Model and a priori knowledge about the spatio-temporal shape variation for tracking deformable structures in medical imaging. The main idea is to exploit the a priori knowledge of shape that exists in ASM and introduce new knowledge about the shape variation over time. 
The aim is to define a new more stable method, allowing the reliable detection of structures whose shape changes considerably in time. This method can also be used for the three-dimensional segmentation by replacing the temporal component by the third spatial axis (z). The proposed method is applied for the functional and morphological study of the heart pump. The functional aspect was studied through temporal sequences of scintigraphic images and morphology was studied through MRI volumes. The obtained results are encouraging and show the performance of the proposed method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=active%20shape%20model" title="active shape model">active shape model</a>, <a href="https://publications.waset.org/abstracts/search?q=a%20priori%20knowledge" title=" a priori knowledge"> a priori knowledge</a>, <a href="https://publications.waset.org/abstracts/search?q=spatiotemporal%20shape%20variation" title=" spatiotemporal shape variation"> spatiotemporal shape variation</a>, <a href="https://publications.waset.org/abstracts/search?q=deformable%20structures" title=" deformable structures"> deformable structures</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a> </p> <a href="https://publications.waset.org/abstracts/29394/modeling-and-tracking-of-deformable-structures-in-medical-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29394.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3344</span> Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks</h5> <div 
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mehrdad%20Shafiei%20Dizaji">Mehrdad Shafiei Dizaji</a>, <a href="https://publications.waset.org/abstracts/search?q=Hoda%20Azari"> Hoda Azari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. This research presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures—or, in medical terms, human tissues—the model effectively embeds the physics of electromagnetic wave propagation into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The suggested deep learning structure comprises three components: a CNN, a spatial feature channel attention (SFCA) mechanism, and ConvLSTM, along with temporal feature frame attention (TFFA) modules. The attention mechanism computes channel attention and temporal attention weights using self-adaptation, thereby fine-tuning the visual and temporal feature responses to extract the most pertinent and significant visual and temporal features. By integrating physics directly into the neural network, our model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for conducting effective assessments of bridge deck conditions and other evaluations related to civil infrastructure. 
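The "physics" that gets embedded can be illustrated with the discrete residual a PINN-style loss drives toward zero, here for the 1D scalar wave equation u_tt = c²·u_xx, a deliberate simplification of the full electromagnetic propagation the paper uses:

```python
import math

def wave_residual(u, dx, dt, c):
    """Discrete residual of u_tt - c^2 * u_xx on the interior of a
    (time x space) grid -- the 'physics' term a PINN would drive to zero."""
    res = []
    for n in range(1, len(u) - 1):          # time index
        for i in range(1, len(u[0]) - 1):   # space index
            u_tt = (u[n + 1][i] - 2 * u[n][i] + u[n - 1][i]) / dt**2
            u_xx = (u[n][i + 1] - 2 * u[n][i] + u[n][i - 1]) / dx**2
            res.append(u_tt - c * c * u_xx)
    return res

c, dx, dt = 1.0, 0.05, 0.05
# Exact travelling wave u(t, x) = sin(x - c*t): residual should be ~0.
grid = [[math.sin(i * dx - c * n * dt) for i in range(20)] for n in range(20)]
r = wave_residual(grid, dx, dt, c)
# A field frozen in time violates the equation: residual is clearly nonzero.
flat_in_time = [[math.sin(i * dx) for i in range(20)] for n in range(20)]
r_bad = wave_residual(flat_in_time, dx, dt, c)
```

In an actual PINN the residual is evaluated by automatic differentiation rather than finite differences and added to the data-fitting loss, so that predictions are penalized whenever they violate the governing equation.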
The use of Physics-Informed Neural Networks (PINNs) has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions. Moreover, it offers a deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=physics-informed%20neural%20networks" title="physics-informed neural networks">physics-informed neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=ground-penetrating%20radar%20%28GPR%29" title=" ground-penetrating radar (GPR)"> ground-penetrating radar (GPR)</a>, <a href="https://publications.waset.org/abstracts/search?q=NDE" title=" NDE"> NDE</a>, <a href="https://publications.waset.org/abstracts/search?q=ConvLSTM" title=" ConvLSTM"> ConvLSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=physics" title=" physics"> physics</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20driven" title=" data driven"> data driven</a> </p> <a href="https://publications.waset.org/abstracts/188443/predicting-subsurface-abnormalities-growth-using-physics-informed-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188443.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">40</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3343</span> Enhancing Quality Management Systems through Automated Controls and Neural Networks</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shara%20Toibayeva">Shara Toibayeva</a>, <a href="https://publications.waset.org/abstracts/search?q=Irbulat%20Utepbergenov"> Irbulat Utepbergenov</a>, <a href="https://publications.waset.org/abstracts/search?q=Lyazzat%20Issabekova"> Lyazzat Issabekova</a>, <a href="https://publications.waset.org/abstracts/search?q=Aidana%20Bodesova"> Aidana Bodesova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The article discusses the importance of quality assessment as a strategic tool in business and emphasizes the significance of the effectiveness of quality management systems (QMS) for enterprises. The evaluation of these systems takes into account the specificity of quality indicators, the multilevel nature of the system, and the need for optimal selection of the number of indicators and evaluation of the system state, which is critical for making rational management decisions. Methods and models of automated enterprise quality management are proposed, including an intelligent automated quality management system integrated with the Management Information and Control System. These systems make it possible to automate the implementation and support of QMS, increasing the validity, efficiency, and effectiveness of management decisions by automating the functions performed by decision makers and personnel. The paper also emphasizes the use of recurrent neural networks to improve automated quality management. Recurrent neural networks (RNNs) are used to analyze and process sequences of data, which is particularly useful in the context of document quality assessment and non-conformance detection in quality management systems. These networks are able to account for temporal dependencies and complex relationships between different data elements, which improves the accuracy and efficiency of automated decisions. 
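As a minimal, illustrative sketch of how a recurrent network can score a sequence of document features (an Elman-style cell with random weights standing in for a trained model; all names and dimensions are assumptions, not the system described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: each document section is a 16-dim feature vector.
n_features, n_hidden, seq_len = 16, 32, 10

# Randomly initialized weights stand in for trained parameters.
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_features))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
b_h = np.zeros(n_hidden)
w_out = rng.normal(scale=0.1, size=n_hidden)

def rnn_conformance_score(sections):
    """Run an Elman RNN over a sequence of section feature vectors and
    return a score in (0, 1), read here as a conformance probability."""
    h = np.zeros(n_hidden)
    for x in sections:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # hidden state carries temporal context
    logit = w_out @ h
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid

doc = rng.normal(size=(seq_len, n_features))
score = rnn_conformance_score(doc)
print(f"conformance score: {score:.3f}")
```

The point of the recurrence is the `W_hh @ h` term: each section's score depends on the sections that preceded it, which is what lets the model capture the temporal dependencies between document elements.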
The project was supported by a grant from the Ministry of Education and Science of the Republic of Kazakhstan under the Zhas Galym project No. AR 13268939, dedicated to research and development of digital technologies to ensure consistency of QMS regulatory documents. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automated%20control%20system" title="automated control system">automated control system</a>, <a href="https://publications.waset.org/abstracts/search?q=quality%20management" title=" quality management"> quality management</a>, <a href="https://publications.waset.org/abstracts/search?q=document%20structure" title=" document structure"> document structure</a>, <a href="https://publications.waset.org/abstracts/search?q=formal%20language" title=" formal language"> formal language</a> </p> <a href="https://publications.waset.org/abstracts/188968/enhancing-quality-management-systems-through-automated-controls-and-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">39</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3342</span> Advances on the Understanding of Sequence Convergence Seen from the Perspective of Mathematical Working Spaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Paula%20Verdugo-Hernandez">Paula Verdugo-Hernandez</a>, <a href="https://publications.waset.org/abstracts/search?q=Patricio%20Cumsille"> Patricio Cumsille</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We analyze a first class on the convergence of real number sequences (hereafter, sequences),
to foster exploration and discovery of concepts through graphical representations before engaging students in proving. The main goal was to differentiate between sequences and continuous functions of a real variable and to better understand the concepts at an initial stage. We applied the analytic frame of mathematical working spaces, which we expect to extend to sequences, since, as far as we know, it has so far been developed only for other objects. This frame is relevant for analyzing how mathematical work is built systematically by connecting the epistemological and cognitive perspectives and involving the semiotic, instrumental, and discursive dimensions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convergence" title="convergence">convergence</a>, <a href="https://publications.waset.org/abstracts/search?q=graphical%20representations" title=" graphical representations"> graphical representations</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20working%20spaces" title=" mathematical working spaces"> mathematical working spaces</a>, <a href="https://publications.waset.org/abstracts/search?q=paradigms%20of%20real%20analysis" title=" paradigms of real analysis"> paradigms of real analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=real%20number%20sequences" title=" real number sequences"> real number sequences</a> </p> <a href="https://publications.waset.org/abstracts/133407/advances-on-the-understanding-of-sequence-convergence-seen-from-the-perspective-of-mathematical-working-spaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133407.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header"
style="font-size:.9rem"><span class="badge badge-info">3341</span> Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yijun%20Shao">Yijun Shao</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20Cheng"> Yan Cheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Rashmee%20U.%20Shah"> Rashmee U. Shah</a>, <a href="https://publications.waset.org/abstracts/search?q=Charlene%20R.%20Weir"> Charlene R. Weir</a>, <a href="https://publications.waset.org/abstracts/search?q=Bruce%20E.%20Bray"> Bruce E. Bray</a>, <a href="https://publications.waset.org/abstracts/search?q=Qing%20Zeng-Treitler"> Qing Zeng-Treitler</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep neural network (DNN) models are being explored in the clinical domain, following the recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by Veteran Affairs Information and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP. A temporal image was created based on these variables for each individual patient. 
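The abstract does not specify the exact encoding of the temporal image; one plausible minimal sketch (the variable names and bin counts below are illustrative assumptions, not the study's design) is a 2-D count matrix with one row per clinical variable and one column per month of the 2-year look-back window:

```python
import numpy as np

# Hypothetical encoding: one row per clinical variable, one column per month
# in the 2-year look-back window (24 bins). Cell (v, m) counts occurrences
# of variable v in month m before the index procedure.
VARIABLES = ["diagnosis:CHF", "procedure:PCI", "med:statin", "hospitalization"]
N_MONTHS = 24

def temporal_image(events):
    """events: list of (variable_name, months_before_index) tuples."""
    img = np.zeros((len(VARIABLES), N_MONTHS))
    row = {v: i for i, v in enumerate(VARIABLES)}
    for name, months_ago in events:
        if name in row and 0 <= months_ago < N_MONTHS:
            img[row[name], months_ago] += 1
    return img

img = temporal_image([("med:statin", 1), ("med:statin", 2), ("hospitalization", 5)])
print(img.shape)  # (4, 24)
```

A matrix of this shape can then be fed to a convolutional network exactly as an image would be, which is what makes the "temporal image" framing convenient.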
To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact of a clinical condition’s presence or value on the predicted outcome. Like the (log) odds ratios reported by the logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The prediction of the DNN model achieved an area under the curve (AUC) of 78.5% while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman’s rho = 0.74), which helped validate our explanation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20network" title="deep neural network">deep neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20data" title=" temporal data"> temporal data</a>, <a href="https://publications.waset.org/abstracts/search?q=prediction" title=" prediction"> prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=frailty" title=" frailty"> frailty</a>, <a href="https://publications.waset.org/abstracts/search?q=logistic%20regression%20model" title=" logistic regression model"> logistic regression model</a> </p> <a href="https://publications.waset.org/abstracts/99910/shedding-light-on-the-black-box-explaining-deep-neural-network-prediction-of-clinical-outcome" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/99910.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge
badge-info">3340</span> Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Arabi">Mohammad Arabi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. 
Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes. 
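As a hedged illustration of the temporal and frequency features discussed above (the specific features, sampling rate, and synthetic fault signature here are illustrative, not taken from the study):

```python
import numpy as np

fs = 12_000  # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic vibration signals: the fault adds a peaky, impulsive component
# at a made-up characteristic frequency of 157 Hz.
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)
faulty = healthy + 0.8 * np.sin(2 * np.pi * 157 * t) ** 9

def features(x, fs):
    """Temporal (RMS, kurtosis, crest factor) and frequency (dominant
    frequency) features of a vibration signal."""
    rms = np.sqrt(np.mean(x ** 2))
    kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    crest = np.max(np.abs(x)) / rms
    spectrum = np.abs(np.fft.rfft(x))
    dom_freq = np.fft.rfftfreq(x.size, 1 / fs)[np.argmax(spectrum)]
    return {"rms": rms, "kurtosis": kurtosis, "crest": crest, "dom_freq": dom_freq}

print(features(healthy, fs))
print(features(faulty, fs))
```

Feature vectors of this kind are what would then be passed to ANOVA, t-tests, or a classifier such as an SVM; the statistical tests ask whether the healthy and faulty feature distributions differ significantly.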
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electric%20motor" title="electric motor">electric motor</a>, <a href="https://publications.waset.org/abstracts/search?q=fault%20detection" title=" fault detection"> fault detection</a>, <a href="https://publications.waset.org/abstracts/search?q=frequency%20features" title=" frequency features"> frequency features</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20features" title=" temporal features"> temporal features</a> </p> <a href="https://publications.waset.org/abstracts/186563/utilizing-temporal-and-frequency-features-in-fault-detection-of-electric-motor-bearings-with-advanced-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186563.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">47</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3339</span> Fractal Behaviour of Earthquake Sequences in Himalaya</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kamal">Kamal</a>, <a href="https://publications.waset.org/abstracts/search?q=Adil%20Ahmad"> Adil Ahmad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Earthquakes are among the most versatile natural and dynamic processes, and hence a fractal model is considered their best representative. We present a novel method to process and analyse information hidden in earthquake sequences using Fractal Dimensions and Iterated Function Systems (IFS). Spatial and temporal variations in the fractal dimensions of seismicity observed around the Indian peninsula in the last 30 years are studied. 
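A standard way to estimate the fractal dimension of an epicenter set is box counting; the following is a purely illustrative sketch on synthetic points, not the catalog or exact procedure used in the study:

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the box-counting dimension of a 2-D point set: count the
    occupied boxes N(s) at each scale s and fit log N(s) ~ -D log s."""
    points = np.asarray(points, dtype=float)
    lo = points.min(axis=0)
    counts = []
    for s in box_sizes:
        cells = np.floor((points - lo) / s).astype(int)
        counts.append(len({tuple(c) for c in cells}))  # distinct occupied boxes
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Sanity check: a densely filled square should give a dimension close to 2.
rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, size=(20_000, 2))
D = box_counting_dimension(pts, box_sizes=[0.2, 0.1, 0.05, 0.025])
print(round(D, 2))
```

For a real catalog, `pts` would be epicenter coordinates, and spatial or temporal variation in `D` is obtained by applying the estimator to moving windows of the data.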
These variations were used as a possible precursor to large earthquakes in the region. IFS images for observed seismicity in the Himalayan belt were also obtained. We scan the whole data set and coarse-grain a selected window to reduce it to four bins. A critical analysis of the four-cornered chaos game clearly shows that the spatial variation in earthquake occurrences in the Himalayan range is not random. Two subzones of the Himalaya have a tendency to follow each other in time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=earthquakes" title="earthquakes">earthquakes</a>, <a href="https://publications.waset.org/abstracts/search?q=fractals" title=" fractals"> fractals</a>, <a href="https://publications.waset.org/abstracts/search?q=Himalaya" title=" Himalaya"> Himalaya</a>, <a href="https://publications.waset.org/abstracts/search?q=iterated%20function%20systems" title=" iterated function systems "> iterated function systems </a> </p> <a href="https://publications.waset.org/abstracts/84637/fractal-behaviour-of-earthquake-sequences-in-himalaya" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84637.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">300</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3338</span> Maximum-likelihood Inference of Multi-Finger Movements Using Neural Activities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kyung-Jin%20You">Kyung-Jin You</a>, <a href="https://publications.waset.org/abstracts/search?q=Kiwon%20Rhee"> Kiwon Rhee</a>, <a href="https://publications.waset.org/abstracts/search?q=Marc%20H.%20Schieber"> Marc H. 
Schieber</a>, <a href="https://publications.waset.org/abstracts/search?q=Nitish%20V.%20Thakor"> Nitish V. Thakor</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyun-Chool%20Shin">Hyun-Chool Shin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It remains unknown whether M1 neurons encode multi-finger movements independently or as a neural combination of single finger movements, although multi-finger movements are physically a combination of single finger movements. We present evidence of a correlation between single and multi-finger movements and also attempt the challenging task of semi-blind decoding of neural data with minimal training of the neural decoder. Data were collected from 115 task-related neurons in M1 of a trained rhesus monkey performing flexion and extension of each finger and the wrist (12 single and 6 two-finger movements). By exploiting the correlation of temporal firing patterns between movements, we found that the correlation coefficient for physically related movement pairs is greater than for others; neurons tuned to single finger movements increased their firing rate when multi-finger commands were instructed. Based on this knowledge, semi-blind neural decoding is done by choosing the greatest and second-greatest likelihoods among canonical candidates. We achieved a decoding accuracy of about 60% for multi-finger movements without a corresponding training data set. These results suggest that neural activities recorded during single finger movements alone can be exploited to control dexterous multi-fingered neuroprosthetics. 
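The decoding rule sketched above (pick the movements with the greatest and second-greatest likelihoods) can be illustrated with a Poisson firing-rate model; the neuron count, tuning rates, and movement labels below are invented for the sketch, not taken from the recorded data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative tuning: mean firing rates (spikes/bin) of 5 neurons for 3
# candidate movements (values are made up for this sketch).
rates = {
    "index_flex": np.array([8.0, 2.0, 1.0, 5.0, 3.0]),
    "thumb_flex": np.array([2.0, 9.0, 4.0, 1.0, 2.0]),
    "wrist_ext":  np.array([1.0, 1.0, 7.0, 2.0, 6.0]),
}

def log_likelihood(counts, lam):
    # Poisson log-likelihood up to a constant (the log n! term cancels
    # when comparing movements on the same observed spike counts).
    return np.sum(counts * np.log(lam) - lam)

def top_two_movements(counts):
    """Return the two candidate movements with the highest likelihoods."""
    scored = sorted(rates, key=lambda m: log_likelihood(counts, rates[m]),
                    reverse=True)
    return scored[:2]

observed = rng.poisson(rates["index_flex"])  # spikes evoked by index flexion
print(top_two_movements(observed))
```

Keeping the top two candidates, rather than only the maximum, mirrors the paper's idea of using the greatest and second-greatest likelihoods when multi-finger movements are built from single finger templates.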
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=finger%20movement" title="finger movement">finger movement</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20activity" title=" neural activity"> neural activity</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20decoding" title=" blind decoding"> blind decoding</a>, <a href="https://publications.waset.org/abstracts/search?q=M1" title=" M1"> M1</a> </p> <a href="https://publications.waset.org/abstracts/1874/maximum-likelihood-inference-of-multi-finger-movements-using-neural-activities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1874.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">321</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3337</span> Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdesselem%20Dakhli">Abdesselem Dakhli</a>, <a href="https://publications.waset.org/abstracts/search?q=Wajdi%20Bellil"> Wajdi Bellil</a>, <a href="https://publications.waset.org/abstracts/search?q=Chokri%20Ben%20Amar"> Chokri Ben Amar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A DNA Barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of three subunits: a phosphate group, a sugar, and a nucleic base (A, T, C, or G). Barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA Barcode sequences has been studied by several researchers. 
The classification problem assigns unknown species to known ones by analyzing their Barcodes. This task has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics like progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable, and our method avoids the complex problem of form and structure in different classes of organisms. It is evaluated on empirical data, and its classification performance is compared with that of other methods. Our system consists of three phases. The first, transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of DNA Barcodes, Fourier Transform, and Power Spectrum Signal Processing. The second, approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third, classification of DNA Barcodes, is realized by applying a hierarchical classification algorithm. 
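The transformation phase above can be sketched as follows; the EIIP values are the commonly cited nucleotide values (A = 0.1260, G = 0.0806, T = 0.1335, C = 0.1340), and the toy sequence is illustrative rather than a real barcode:

```python
import numpy as np

# Commonly cited EIIP values for the four nucleotides.
EIIP = {"A": 0.1260, "G": 0.0806, "T": 0.1335, "C": 0.1340}

def power_spectrum(sequence):
    """Codify a DNA barcode as an EIIP numerical signal and return its
    power spectrum (squared magnitude of the DFT)."""
    signal = np.array([EIIP[b] for b in sequence.upper()])
    signal = signal - signal.mean()  # remove the DC component
    spectrum = np.fft.rfft(signal)
    return np.abs(spectrum) ** 2

seq = "ATGCGCGCGCTAATGCGCGCGCTA"  # toy barcode fragment
ps = power_spectrum(seq)
print(ps.round(5))
```

In the full system, spectra of this kind would be the inputs approximated by the wavelet networks before the final hierarchical classification step.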
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DNA%20barcode" title="DNA barcode">DNA barcode</a>, <a href="https://publications.waset.org/abstracts/search?q=electron-ion%20interaction%20pseudopotential" title=" electron-ion interaction pseudopotential"> electron-ion interaction pseudopotential</a>, <a href="https://publications.waset.org/abstracts/search?q=Multi%20Library%20Wavelet%20Neural%20Networks%20%28MLWNN%29" title=" Multi Library Wavelet Neural Networks (MLWNN)"> Multi Library Wavelet Neural Networks (MLWNN)</a> </p> <a href="https://publications.waset.org/abstracts/27669/unsupervised-classification-of-dna-barcodes-species-using-multi-library-wavelet-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27669.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3336</span> Algorithm and Software Based on Multilayer Perceptron Neural Networks for Estimating Channel Use in the Spectral Decision Stage in Cognitive Radio Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Danilo%20L%C3%B3pez">Danilo López</a>, <a href="https://publications.waset.org/abstracts/search?q=Johana%20Hern%C3%A1ndez"> Johana Hernández</a>, <a href="https://publications.waset.org/abstracts/search?q=Edwin%20Rivas"> Edwin Rivas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use of the Multilayer Perceptron Neural Networks (MLPNN) technique is presented to estimate the future state of use of a licensed channel by primary users (PUs); this will be useful at the spectral decision stage in cognitive radio 
networks (CRN) to determine approximately at which future time instants secondary users (SUs) may opportunistically use the spectral bandwidth to send data through the primary wireless network. To validate the results, sequences of channel occupancy data were generated by simulation. The results show that the prediction percentage is greater than 60% in some of the tests carried out. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognitive%20radio" title="cognitive radio">cognitive radio</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=prediction" title=" prediction"> prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=primary%20user" title=" primary user"> primary user</a> </p> <a href="https://publications.waset.org/abstracts/61993/algorithm-and-software-based-on-multilayer-perceptron-neural-networks-for-estimating-channel-use-in-the-spectral-decision-stage-in-cognitive-radio-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61993.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">371</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3335</span> Neural Rendering Applied to Confocal Microscopy Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Li">Daniel Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a novel application of neural rendering methods to confocal microscopy. 
Neural rendering and implicit neural representations have developed at a remarkable pace, and are prevalent in modern 3D computer vision literature. However, they have not yet been applied to optical microscopy, an important imaging field where 3D volume information may be heavily sought after. In this paper, we employ neural rendering on confocal microscopy focus stack data and share the results. We highlight the benefits and potential of adding neural rendering to the toolkit of microscopy image processing techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20rendering" title="neural rendering">neural rendering</a>, <a href="https://publications.waset.org/abstracts/search?q=implicit%20neural%20representations" title=" implicit neural representations"> implicit neural representations</a>, <a href="https://publications.waset.org/abstracts/search?q=confocal%20microscopy" title=" confocal microscopy"> confocal microscopy</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20image%20processing" title=" medical image processing"> medical image processing</a> </p> <a href="https://publications.waset.org/abstracts/153909/neural-rendering-applied-to-confocal-microscopy-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153909.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">658</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=2">2</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=112">112</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=113">113</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=temporal%20neural%20sequences&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a 
href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> 
Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>