<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: deep parsing</title> <meta name="description" content="Search results for: deep parsing"> <meta name="keywords" content="deep parsing"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open 
Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="deep parsing" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div 
class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="deep parsing"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2107</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: deep parsing</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2017</span> Mediation Role of Teachers’ Surface Acting and Deep Acting on the Relationship between Calling Orientation and Work Engagement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yohannes%20Bisa%20Biramo">Yohannes Bisa Biramo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study examined the mediational role of surface acting and deep acting on the relationship between calling orientation and work engagement of teachers in secondary schools of Wolaita Zone, Wolaita, Ethiopia. A predictive, non-experimental correlational design was employed with 300 secondary school teachers. Stratified random sampling followed by a systematic random sampling technique was used to select the sample from the target population. 
To analyze the data, structural equation modeling (SEM) was used to test the associations between the independent and dependent variables. Furthermore, the goodness of fit of the model was tested using SEM to explain the path influence of the independent variables on the dependent variable. Confirmatory factor analysis (CFA) was conducted to test the validity of the scales and to assess the measurement model fit indices. The analysis revealed that calling was significantly and positively correlated with surface acting, deep acting, and work engagement. Similarly, surface acting was significantly and positively correlated with deep acting and work engagement, and deep acting was significantly and positively correlated with work engagement. With respect to mediation analysis, the results revealed that both surface acting and deep acting mediated the relationship between calling and work engagement. Using the model of the present study, school leaders and practitioners can identify core areas to consider in recruiting teachers and assigning them to teach, in giving induction training to newly employed teachers, and in performance appraisal. 
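The product-of-coefficients logic behind such a mediation analysis can be sketched as follows. This is a synthetic-data illustration of the mediation idea (calling → acting strategy → work engagement), not the study's SEM model or dataset; all effect sizes are made up.

```python
import numpy as np

# Synthetic data: calling -> deep acting (mediator) -> work engagement.
rng = np.random.default_rng(0)
n = 300                                   # matches the study's sample size
calling = rng.normal(size=n)
deep_acting = 0.5 * calling + rng.normal(scale=0.8, size=n)                      # path a
engagement = 0.4 * deep_acting + 0.3 * calling + rng.normal(scale=0.8, size=n)   # paths b, c'

def ols(X, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

a = ols(calling, deep_acting)[1]                                  # calling -> mediator
b = ols(np.column_stack([deep_acting, calling]), engagement)[1]   # mediator -> outcome, controlling for calling
indirect = a * b                                                  # mediated (indirect) effect
total = ols(calling, engagement)[1]                               # total effect
print(f"indirect effect a*b = {indirect:.3f}, total effect = {total:.3f}")
```

The indirect effect is the product of the two path coefficients; a full SEM additionally estimates all paths simultaneously with latent variables and fit indices.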
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=calling" title="calling">calling</a>, <a href="https://publications.waset.org/abstracts/search?q=surface%20acting" title=" surface acting"> surface acting</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20acting" title=" deep acting"> deep acting</a>, <a href="https://publications.waset.org/abstracts/search?q=work%20engagement" title=" work engagement"> work engagement</a>, <a href="https://publications.waset.org/abstracts/search?q=mediation" title=" mediation"> mediation</a>, <a href="https://publications.waset.org/abstracts/search?q=teachers" title=" teachers"> teachers</a> </p> <a href="https://publications.waset.org/abstracts/164425/mediation-role-of-teachers-surface-acting-and-deep-acting-on-the-relationship-between-calling-orientation-and-work-engagement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164425.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">83</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2016</span> A Comprehensive Study of Camouflaged Object Detection Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalak%20Bin%20Khair">Khalak Bin Khair</a>, <a href="https://publications.waset.org/abstracts/search?q=Saqib%20Jahir"> Saqib Jahir</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Ibrahim"> Mohammed Ibrahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Fahad%20Bin"> Fahad Bin</a>, <a href="https://publications.waset.org/abstracts/search?q=Debajyoti%20Karmaker"> Debajyoti Karmaker</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Object detection is a computer technology that searches digital images and videos for occurrences of semantic objects of a particular class. It is associated with image processing and computer vision. Building on object detection, we detect camouflaged objects within an image using deep learning techniques. Deep learning is a subset of machine learning based on neural networks with three or more layers. Over 6,500 images that possess camouflage properties are gathered from various internet sources and divided into four categories for comparison. The images are labeled and then trained and tested using the VGG16 architecture in a Jupyter notebook on the TensorFlow platform. The architecture is further customized using transfer learning, which develops methods for transferring knowledge from one or more source tasks to improve learning in a related target task. The purpose of such transfer learning methodologies is to aid the evolution of machine learning to the point where it is as efficient as human learning. 
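The core transfer-learning idea the abstract describes — reuse a pretrained, frozen feature extractor and train only a new classification head — can be sketched in plain numpy. The real study fine-tunes VGG16 in TensorFlow; the "backbone" weights and two-class data below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained backbone (e.g. the VGG16 convolutional base):
# its weights are FROZEN and never updated during transfer learning.
W_frozen = 0.1 * rng.normal(size=(64, 16))

def features(x):
    # Frozen feature extractor: one fixed ReLU layer
    return np.maximum(x @ W_frozen, 0.0)

# Synthetic two-class data (camouflaged vs. background, say)
X = rng.normal(size=(200, 64))
y = (features(X) @ rng.normal(size=16) > 0).astype(float)

# Only the new classification head is trained (logistic regression by
# gradient descent); the backbone stays fixed.
F = features(X)
w_head = np.zeros(16)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(F @ w_head)))
    w_head -= 0.5 * F.T @ (p - y) / len(y)

acc = np.mean((F @ w_head > 0) == (y == 1))
print(f"training accuracy with frozen backbone: {acc:.2f}")
```

Because only the small head is trained, far less data and compute are needed than training the whole network from scratch, which is the practical appeal of transfer learning.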
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=TensorFlow" title=" TensorFlow"> TensorFlow</a>, <a href="https://publications.waset.org/abstracts/search?q=camouflage" title=" camouflage"> camouflage</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=architecture" title=" architecture"> architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=model" title=" model"> model</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG16" title=" VGG16"> VGG16</a> </p> <a href="https://publications.waset.org/abstracts/152633/a-comprehensive-study-of-camouflaged-object-detection-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152633.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2015</span> Optimization of Pressure in Deep Drawing Process</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ajay%20Kumar%20Choubey">Ajay Kumar Choubey</a>, <a href="https://publications.waset.org/abstracts/search?q=Geeta%20Agnihotri"> Geeta Agnihotri</a>, <a 
href="https://publications.waset.org/abstracts/search?q=C.%20Sasikumar"> C. Sasikumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Rashmi%20Dwivedi"> Rashmi Dwivedi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep-drawing operations are performed widely in industrial applications. For efficiency, it is very important to produce parts with no or minimal defects. Deep-drawn parts are used in high-performance, high-strength, and high-reliability applications where tension, stress, load, and human safety are critical considerations. Wrinkling is a defect caused by stresses in the flange of the blank during metal forming operations. To avoid wrinkling, an appropriate blank-holder pressure/force or a drawbead can be applied. Nowadays, computer simulation plays a vital role in manufacturing; it offers many advantages over conventional trial-based process development, such as mass production, good product quality, and faster development. In this study, a two-dimensional elasto-plastic finite element (FE) model of a mild steel blank has been developed to study the behavior of flange wrinkling and deep drawing parameters under different blank-holder pressures (BHP). For this, the commercially available finite element software ANSYS 14 has been used. Simulation results are critically studied, and salient conclusions have been drawn. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ANSYS" title="ANSYS">ANSYS</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20drawing" title=" deep drawing"> deep drawing</a>, <a href="https://publications.waset.org/abstracts/search?q=BHP" title=" BHP"> BHP</a>, <a href="https://publications.waset.org/abstracts/search?q=finite%20element%20simulation" title=" finite element simulation"> finite element simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=wrinkling" title=" wrinkling"> wrinkling</a> </p> <a href="https://publications.waset.org/abstracts/24550/optimization-of-pressure-in-deep-drawing-process" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24550.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">449</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2014</span> Multi-Classification Deep Learning Model for Diagnosing Different Chest Diseases</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bandhan%20Dey">Bandhan Dey</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhsina%20Bintoon%20Yiasha"> Muhsina Bintoon Yiasha</a>, <a href="https://publications.waset.org/abstracts/search?q=Gulam%20Sulaman%20Choudhury"> Gulam Sulaman Choudhury</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Chest disease is one of the most problematic ailments in everyday life. There are many known chest diseases, and diagnosing them correctly plays a vital role in treatment. Many diagnostic methods have been developed explicitly for different chest diseases.
However, the most common approach for diagnosing these diseases is through X-rays. In this paper, we propose a multi-classification deep learning model for diagnosing COVID-19, lung cancer, pneumonia, tuberculosis, and atelectasis from chest X-rays. In the present work, we used transfer learning for better accuracy and a faster training phase. The performance of three architectures is considered: InceptionV3, VGG-16, and VGG-19. We evaluated these deep learning architectures using public digital chest X-ray datasets with six classes (i.e., COVID-19, lung cancer, pneumonia, tuberculosis, atelectasis, and normal). The experiments were conducted on this six-class task, and we found that VGG-16 outperforms the other models with an accuracy of 95%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=X-ray%20images" title=" X-ray images"> X-ray images</a>, <a href="https://publications.waset.org/abstracts/search?q=Tensorflow" title=" Tensorflow"> Tensorflow</a>, <a href="https://publications.waset.org/abstracts/search?q=Keras" title=" Keras"> Keras</a>, <a href="https://publications.waset.org/abstracts/search?q=chest%20diseases" title=" chest diseases"> chest diseases</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-classification" title=" multi-classification"> multi-classification</a> </p> <a href="https://publications.waset.org/abstracts/158065/multi-classification-deep-learning-model-for-diagnosing-different-chest-diseases" class="btn btn-primary btn-sm">Procedia</a> <a
href="https://publications.waset.org/abstracts/158065.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">92</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2013</span> Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hesheng%20Wang">Hesheng Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Haoyu%20Wang"> Haoyu Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chungang%20Zhuang"> Chungang Zhuang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Estimating the 6D pose of objects is a core step for robot bin-picking tasks. The problem is that various objects are usually randomly stacked with heavy occlusion in real applications. In this work, we propose a method to regress 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To solve the ambiguity of symmetric pose, we propose a labeling method to help the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall. 
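Recovering a 6D pose (rotation plus translation) from three predicted keypoints, as the abstract describes, reduces to a classic rigid-alignment computation. The sketch below uses the standard Kabsch/SVD solution on hypothetical, noiseless points; it illustrates the geometry only, not the paper's network or its symmetry-aware labeling.

```python
import numpy as np

def pose_from_points(model_pts, pred_pts):
    """Recover rotation R and translation t mapping model-frame points
    onto predicted scene points (Kabsch algorithm)."""
    mc, pc = model_pts.mean(0), pred_pts.mean(0)
    H = (model_pts - mc).T @ (pred_pts - pc)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = pc - R @ mc
    return R, t

# Three non-collinear keypoints fixed on the object model
model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
# Ground-truth pose: 90-degree rotation about z, plus a translation
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
scene = model @ Rz.T + t_true                    # the "predicted" points

R, t = pose_from_points(model, scene)
print(np.allclose(R, Rz), np.allclose(t, t_true))
```

With noisy network predictions, this closed-form step would serve as the initialization that an iterative refinement (as in the paper) then improves.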
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title="pose estimation">pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20cloud" title=" point cloud"> point cloud</a>, <a href="https://publications.waset.org/abstracts/search?q=bin-picking" title=" bin-picking"> bin-picking</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20computer%20vision" title=" 3D computer vision"> 3D computer vision</a> </p> <a href="https://publications.waset.org/abstracts/132349/deep-learning-based-6d-pose-estimation-for-bin-picking-using-3d-point-clouds" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132349.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2012</span> Gaits Stability Analysis for a Pneumatic Quadruped Robot Using Reinforcement Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Soofiyan%20Atar">Soofiyan Atar</a>, <a href="https://publications.waset.org/abstracts/search?q=Adil%20Shaikh"> Adil Shaikh</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahil%20Rajpurkar"> Sahil Rajpurkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Pragnesh%20Bhalala"> Pragnesh Bhalala</a>, <a href="https://publications.waset.org/abstracts/search?q=Aniket%20Desai"> Aniket Desai</a>, <a href="https://publications.waset.org/abstracts/search?q=Irfan%20Siddavatam"> Irfan Siddavatam</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Deep reinforcement learning (deep RL) algorithms replace hand-engineered controllers by automatically mapping sensory inputs to low-level actions, eliminating complex robot dynamics modeling with minimal engineering. However, deep RL involves high risk when implemented directly in real-world scenarios and is highly sensitive to hyperparameters. Tuning hyperparameters on a pneumatic quadruped robot becomes very expensive through trial-and-error learning. This paper presents automated learning control for a pneumatic quadruped robot using sample-efficient deep Q-learning, enabling minimal tuning and very few trials to train the neural network. Long training hours may degrade the pneumatic cylinders due to jerky actions originating from stochastic weights. We applied this method to the pneumatic quadruped robot, which resulted in a hopping gait. In our process, we eliminated the use of a simulator and acquired a stable gait. The approach evolves so that the resulting gait becomes more robust to stochastic changes in the environment. We further show that our algorithm performs very well compared to a programmed gait based on robot dynamics. 
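The Q-learning update at the heart of such a controller can be shown in its simplest tabular form. The toy chain environment and reward below are invented for illustration — the paper uses a deep Q-network on the physical robot — but the bootstrapped update rule is the same idea.

```python
import numpy as np

# Tabular Q-learning on a toy 5-state chain (a stand-in for gait learning,
# NOT the paper's robot or reward design): action 1 steps right, action 0
# steps left, and reaching the last state yields reward 1.
n_states, n_actions = 5, 2
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

for episode in range(2000):
    s = int(rng.integers(n_states - 1))      # random start state
    for _ in range(20):
        a = int(rng.integers(n_actions))     # random exploration (Q-learning is off-policy)
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: bootstrap from the greedy value of the next state
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
        if s == n_states - 1:                # goal reached, end the episode
            break

policy = np.argmax(Q, axis=1)
print("learned policy (1 = step right):", policy[:-1])
```

A deep Q-network replaces the table with a neural network so the same update scales to continuous sensor inputs, and sample-efficient variants reduce how many real-robot trials the loop needs.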
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=model-based%20reinforcement%20learning" title="model-based reinforcement learning">model-based reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=gait%20stability" title=" gait stability"> gait stability</a>, <a href="https://publications.waset.org/abstracts/search?q=supervised%20learning" title=" supervised learning"> supervised learning</a>, <a href="https://publications.waset.org/abstracts/search?q=pneumatic%20quadruped" title=" pneumatic quadruped"> pneumatic quadruped</a> </p> <a href="https://publications.waset.org/abstracts/140524/gaits-stability-analysis-for-a-pneumatic-quadruped-robot-using-reinforcement-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/140524.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">316</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2011</span> Deep Learning to Improve the 5G NR Uplink Control Channel</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Krobba">Ahmed Krobba</a>, <a href="https://publications.waset.org/abstracts/search?q=Meriem%20Touzene"> Meriem Touzene</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Debeyche"> Mohamed Debeyche</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fifth-generation (5G) wireless communication systems will provide more diverse applications and higher-quality services for users compared to fourth-generation Long-Term Evolution (4G LTE). However, 5G uses higher carrier frequencies, which suffer from signal loss that limits 5G coverage.
Many 5G users cannot obtain high-quality communications due to transmission channel noise and channel complexity. The Physical Uplink Control Channel for New Radio (PUCCH-NR) plays a crucial role in 5G NR technology; it is mainly used to transmit Uplink Control Information (UCI). This study evaluates the performance of the PUCCH-NR channel at low signal-to-noise ratios with various numbers of receive antennas. We propose a deep-neural-network (deep learning) approach to estimate the PUCCH-NR channel and compare it with conventional methods such as least-squares (LS) and minimum mean-square error (MMSE) estimation. To evaluate channel performance, we use the block error rate (BLER) as the evaluation criterion for the communication system. The results show that the deep neural network method gives the best performance compared with MMSE and LS. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=5G%20network" title="5G network">5G network</a>, <a href="https://publications.waset.org/abstracts/search?q=uplink%20%28Uplink%29" title=" uplink (Uplink)"> uplink (Uplink)</a>, <a href="https://publications.waset.org/abstracts/search?q=PUCCH%20channel" title=" PUCCH channel"> PUCCH channel</a>, <a href="https://publications.waset.org/abstracts/search?q=NR-PUCCH%20channel" title=" NR-PUCCH channel"> NR-PUCCH channel</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/183158/deep-learning-to-improve-the-5g-nr-uplink-control-channel" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183158.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1
float-right rounded"> Downloads <span class="badge badge-light">82</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2010</span> Assessing the Effectiveness of Machine Learning Algorithms for Cyber Threat Intelligence Discovery from the Darknet</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Azene%20Zenebe">Azene Zenebe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep learning is a subset of machine learning that incorporates techniques for constructing artificial neural networks and has been found useful for modeling complex problems with large datasets. Deep learning requires very high computational power and long training times. By aggregating computing power, high-performance computing (HPC) has emerged as an approach to solving advanced problems and performing data-driven research activities. Cyber threat intelligence (CTI) is actionable information or insight an organization or individual uses to understand the threats that have targeted, will target, or are currently targeting the organization. Results of a literature review will be presented, along with results of an experimental study that compares the performance of tree-based and function-based machine learning, including deep learning algorithms, using a secondary dataset collected from the darknet. 
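The tree-based versus function-based comparison can be illustrated with two minimal baselines: a single-split decision stump (tree-based) and logistic regression (function-based). The two-feature dataset below is synthetic, standing in for labeled darknet traffic; it is not the study's data or its full algorithm suite.

```python
import numpy as np

# Synthetic stand-in for a labeled darknet dataset:
# one informative feature, one pure-noise feature.
rng = np.random.default_rng(7)
n = 400
y = rng.integers(0, 2, size=n)
X = np.column_stack([y + rng.normal(scale=0.5, size=n),   # informative
                     rng.normal(size=n)])                 # noise

def stump_accuracy(X, y):
    """Tree-based baseline: best single-feature threshold split."""
    best = 0.0
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            acc = max(np.mean((X[:, f] > thr) == y),
                      np.mean((X[:, f] <= thr) == y))
            best = max(best, acc)
    return best

def logreg_accuracy(X, y, iters=1000, lr=0.1):
    """Function-based baseline: logistic regression by gradient descent."""
    Xb = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return np.mean((Xb @ w > 0) == (y == 1))

print(f"stump: {stump_accuracy(X, y):.2f}, logistic: {logreg_accuracy(X, y):.2f}")
```

Real comparisons, as in the study, use full ensembles (random forests, gradient boosting) against neural networks, with held-out test sets rather than training accuracy.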
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title="deep-learning">deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=cyber%20security" title=" cyber security"> cyber security</a>, <a href="https://publications.waset.org/abstracts/search?q=cyber%20threat%20modeling" title=" cyber threat modeling"> cyber threat modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=tree-based%20machine%20learning" title=" tree-based machine learning"> tree-based machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=function-based%20machine%20learning" title=" function-based machine learning"> function-based machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20science" title=" data science"> data science</a> </p> <a href="https://publications.waset.org/abstracts/148566/assessing-the-effectiveness-of-machine-learning-algorithms-for-cyber-threat-intelligence-discovery-from-the-darknet" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148566.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2009</span> High-Capacity Image Steganography using Wavelet-based Fusion on Deep Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amal%20Khalifa">Amal Khalifa</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicolas%20Vana%20Santos"> Nicolas Vana Santos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Steganography has been known for centuries as an efficient approach 
for covert communication. Due to its popularity and ease of access, image steganography has attracted researchers to find secure techniques for hiding information within an innocent-looking cover image. In this research, we propose a novel deep-learning approach to digital image steganography. The proposed method, DeepWaveletFusion, uses convolutional neural networks (CNN) to hide a secret image inside a cover image of the same size. Two CNNs are trained back-to-back to merge the Discrete Wavelet Transform (DWT) of both colored images and eventually be able to blindly extract the hidden image. Based on two different image similarity metrics, a weighted gain function is used to guide the learning process and maximize the quality of the retrieved secret image while maintaining acceptable imperceptibility. Experimental results verified the high recoverability of DeepWaveletFusion, which outperformed similar deep-learning-based methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=steganography" title=" steganography"> steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title=" discrete wavelet transform"> discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a> </p> <a href="https://publications.waset.org/abstracts/170293/high-capacity-image-steganography-using-wavelet-based-fusion-on-deep-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170293.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge
badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2008</span> A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Zarabian">Maryam Zarabian</a>, <a href="https://publications.waset.org/abstracts/search?q=Hector%20Guzman"> Hector Guzman</a>, <a href="https://publications.waset.org/abstracts/search?q=Pedro%20Pereira-Almao"> Pedro Pereira-Almao</a>, <a href="https://publications.waset.org/abstracts/search?q=Abraham%20Fapojuwo"> Abraham Fapojuwo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical productions. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models inherently would not be able to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ pretrained-layer reuse approaches such as transfer learning and greedy layer-wise pretraining. 
Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate predictions, with accuracy similar to the RF model: R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=dry%20reforming%20of%20methane" title=" dry reforming of methane"> dry reforming of methane</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title=" artificial neural network"> artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=greedy%20layer-wise%20pretraining" title=" greedy layer-wise pretraining"> greedy layer-wise pretraining</a> </p> <a href="https://publications.waset.org/abstracts/163075/a-deep-learning-model-with-greedy-layer-wise-pretraining-approach-for-optimal-syngas-production-by-dry-reforming-of-methane" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163075.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">86</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header"
style="font-size:.9rem"><span class="badge badge-info">2007</span> Hydrogeochemical Characteristics of the Different Aquiferous Layers in Oban Basement Complex Area (SE Nigeria)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Azubuike%20Ekwere">Azubuike Ekwere</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The shallow and deep aquiferous horizons of the fractured and weathered crystalline basement of the Oban Massif of south-eastern Nigeria were studied during the dry and wet seasons. The aim was to ascertain the hydrochemistry relative to seasonal and spatial variations across the study area. Results indicate that the concentrations of major cations and anions exhibit the orders of abundance Ca>Na>Mg>K and HCO3>SO4>Cl, respectively, with minor variations across sampling seasons. The major elements Ca, Mg, Na, and K were higher for the shallow aquifers than for the deep aquifers across seasons. The major anions Cl, SO4, HCO3, and NO3 were higher for the deep aquifers compared to the shallow ones. Two water types were identified for both aquifer types: Ca-Mg-HCO3 and Ca-Na-Cl-SO4. Most of the parameters considered were within the international limits for drinking, domestic, and irrigation purposes. Assessment by use of the sodium adsorption ratio (SAR), percent sodium (%Na), and the Wilcox diagram reveals that the waters are suitable for irrigation purposes.
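The irrigation-suitability indices named in this abstract are simple functions of the major-ion concentrations. A minimal sketch using the standard formulas (the concentration values below are purely illustrative, not data from the study; all concentrations in meq/L):

```python
from math import sqrt

def sar(na, ca, mg):
    """Sodium adsorption ratio: SAR = Na / sqrt((Ca + Mg) / 2)."""
    return na / sqrt((ca + mg) / 2.0)

def percent_sodium(na, k, ca, mg):
    """Percent sodium (%Na) as used in the Wilcox classification."""
    return 100.0 * (na + k) / (ca + mg + na + k)

# Hypothetical groundwater sample (meq/L), for illustration only
na, k, ca, mg = 1.2, 0.1, 2.4, 1.0
print(round(sar(na, ca, mg), 2))                # SAR
print(round(percent_sodium(na, k, ca, mg), 1))  # %Na
```

Both indices are then read against the Wilcox classification limits to judge irrigation suitability.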
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=shallow%20aquifer" title="shallow aquifer">shallow aquifer</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20aquifer" title=" deep aquifer"> deep aquifer</a>, <a href="https://publications.waset.org/abstracts/search?q=seasonal%20variation" title=" seasonal variation"> seasonal variation</a>, <a href="https://publications.waset.org/abstracts/search?q=hydrochemistry" title=" hydrochemistry"> hydrochemistry</a>, <a href="https://publications.waset.org/abstracts/search?q=Oban%20massif" title=" Oban massif"> Oban massif</a>, <a href="https://publications.waset.org/abstracts/search?q=Nigeria" title=" Nigeria"> Nigeria</a> </p> <a href="https://publications.waset.org/abstracts/2099/hydrogeochemical-characteristics-of-the-different-aquiferous-layers-in-oban-basement-complex-area-se-nigeria" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2099.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">662</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2006</span> Restoring Sagging Neck with Minimal Scar Face Lifting</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alessandro%20Marano">Alessandro Marano</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The author describes the use of deep plane face lifting and platysmaplasty to treat the sagging neck with minimal scars. The study design is a case series.
The author uses a selective deep plane face lift with a minimal-access scar that does not extend behind the ear lobe, together with neck liposuction and platysmaplasty, to restore the sagging neck; the scars are minimal and do not require drainage post-op. The deep plane face lift can achieve a good result by restoring vertical vectors in the aging and sagging face; the neck can be treated without cutting the skin behind the ear lobe by combining vertical SMAS suspension and platysmaplasty, and surgery can be performed under local anesthesia with sedation as day surgery, with fast recovery. Restoring a sagging neck without extending scars behind the ear lobe is possible in selected patients; the procedure is fast and safe, no drainage is required, patients are satisfied, and healing is fast and comfortable. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20lifting" title="face lifting">face lifting</a>, <a href="https://publications.waset.org/abstracts/search?q=aesthetic" title=" aesthetic"> aesthetic</a>, <a href="https://publications.waset.org/abstracts/search?q=face" title=" face"> face</a>, <a href="https://publications.waset.org/abstracts/search?q=neck" title=" neck"> neck</a>, <a href="https://publications.waset.org/abstracts/search?q=platysmaplasty" title=" platysmaplasty"> platysmaplasty</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20plane" title=" deep plane"> deep plane</a> </p> <a href="https://publications.waset.org/abstracts/149687/restoring-sagging-neck-with-minimal-scar-face-lifting" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149687.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2005</span>
Monitoring the Effect of Deep Frying and the Type of Food on the Quality of Oil</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omar%20Masaud%20Almrhag">Omar Masaud Almrhag</a>, <a href="https://publications.waset.org/abstracts/search?q=Frage%20Lhadi%20Abookleesh"> Frage Lhadi Abookleesh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Different types of food, such as banana, potato, and chicken, affect the quality of oil during deep-fat frying. The changes in the quality of oil were evaluated and compared. Four different types of edible oils, namely corn oil, soybean, canola, and palm oil, were used for deep-fat frying at 180°C ± 5°C for 5 h/d for six consecutive days. Potatoes were sliced into 7-8 cm wedges, and chicken was cut into uniform pieces of 100 g each. The parameters used to assess the quality of oil were total polar compounds (TPC), iodine value (IV), specific extinction E1% at 233 nm and 269 nm, fatty acid composition (FAC), free fatty acids (FFA), viscosity (cP), and changes in the thermal properties. Results showed that TPC, IV, FAC, viscosity (cP), and FFA composition changed significantly with time (P < 0.05) and type of food. Significant differences (P < 0.05) were noted for the measured parameters during frying of the three above-mentioned products.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=frying%20potato" title="frying potato">frying potato</a>, <a href="https://publications.waset.org/abstracts/search?q=chicken" title=" chicken"> chicken</a>, <a href="https://publications.waset.org/abstracts/search?q=frying%20deterioration" title=" frying deterioration"> frying deterioration</a>, <a href="https://publications.waset.org/abstracts/search?q=quality%20of%20oil" title=" quality of oil "> quality of oil </a> </p> <a href="https://publications.waset.org/abstracts/11028/monitoring-the-effect-of-deep-frying-and-the-type-of-food-on-the-quality-of-oil" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11028.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">420</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2004</span> A Convolutional Deep Neural Network Approach for Skin Cancer Detection Using Skin Lesion Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Firas%20Gerges">Firas Gerges</a>, <a href="https://publications.waset.org/abstracts/search?q=Frank%20Y.%20Shih"> Frank Y. Shih</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Malignant melanoma, known simply as melanoma, is a type of skin cancer that appears as a mole on the skin. It is critical to detect this cancer at an early stage because it can spread across the body and may lead to the patient's death. When detected early, melanoma is curable. In this paper, we propose a deep learning model (convolutional neural networks) in order to automatically classify skin lesion images as malignant or benign. 
Images underwent certain pre-processing steps to diminish the effect of the normal skin region on the model. The result of the proposed model showed a significant improvement over previous work, achieving an accuracy of 97%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20cancer" title=" skin cancer"> skin cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=melanoma" title=" melanoma"> melanoma</a> </p> <a href="https://publications.waset.org/abstracts/134720/a-convolutional-deep-neural-network-approach-for-skin-cancer-detection-using-skin-lesion-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/134720.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">148</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2003</span> AI Peer Review Challenge: Standard Model of Physics vs 4D GEM EOS</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=David%20A.%20Harness">David A. Harness</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Natural evolution of ATP cognitive systems is to meet AI peer review standards. 
ATP process of axiom selection from Mizar to prove a conjecture would be further refined, as in all human and machine learning, by solving the real world problem of the proposed AI peer review challenge: Determine which conjecture forms the higher confidence level constructive proof between Standard Model of Physics SU(n) lattice gauge group operation vs. present non-standard 4D GEM EOS SU(n) lattice gauge group spatially extended operation in which the photon and electron are the first two trace angular momentum invariants of a gravitoelectromagnetic (GEM) energy momentum density tensor wavetrain integration spin-stress pressure-volume equation of state (EOS), initiated via 32 lines of Mathematica code. Resulting gravitoelectromagnetic spectrum ranges from compressive through rarefactive of the central cosmological constant vacuum energy density in units of pascals. Said self-adjoint group operation exclusively operates on the stress energy momentum tensor of the Einstein field equations, introducing quantization directly on the 4D spacetime level, essentially reformulating the Yang-Mills virtual superpositioned particle compounded lattice gauge groups quantization of the vacuum—into a single hyper-complex multi-valued GEM U(1) × SU(1,3) lattice gauge group Planck spacetime mesh quantization of the vacuum. Thus the Mizar corpus already contains all of the axioms required for relevant DeepMath premise selection and unambiguous formal natural language parsing in context deep learning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automated%20theorem%20proving" title="automated theorem proving">automated theorem proving</a>, <a href="https://publications.waset.org/abstracts/search?q=constructive%20quantum%20field%20theory" title=" constructive quantum field theory"> constructive quantum field theory</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20theory" title=" information theory"> information theory</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/74654/ai-peer-review-challenge-standard-model-of-physics-vs-4d-gem-eos" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74654.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">179</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2002</span> COVID-19 Analysis with Deep Learning Model Using Chest X-Rays Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Uma%20Maheshwari%20V.">Uma Maheshwari V.</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajanikanth%20Aluvalu"> Rajanikanth Aluvalu</a>, <a href="https://publications.waset.org/abstracts/search?q=Kumar%20Gautam"> Kumar Gautam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The COVID-19 disease is a highly contagious viral infection with major worldwide health implications. The global economy suffers as a result of COVID. The spread of this pandemic disease can be slowed if positive patients are found early. 
COVID-19 prediction is beneficial for identifying patients who are at risk. Deep learning and machine learning algorithms for COVID prediction using X-rays have the potential to be extremely useful in addressing the scarcity of doctors and clinicians in remote places. In this paper, a convolutional neural network (CNN) with deep layers is presented for recognizing COVID-19 patients using real-world datasets. We gathered around 6000 X-ray scan images from various sources and split them into two categories: normal and COVID-impacted. Our model examines chest X-ray images to recognize such patients. X-rays are commonly available and affordable, and our findings show that X-ray analysis is effective in COVID-19 diagnosis. The predictions performed well, with an average accuracy of 99% on training images and 88% on X-ray test images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20CNN" title="deep CNN">deep CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=COVID%E2%80%9319%20analysis" title=" COVID–19 analysis"> COVID–19 analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20map" title=" feature map"> feature map</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a> </p> <a href="https://publications.waset.org/abstracts/162054/covid-19-analysis-with-deep-learning-model-using-chest-x-rays-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162054.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">79</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5
class="card-header" style="font-size:.9rem"><span class="badge badge-info">2001</span> Automatic Intelligent Analysis of Malware Behaviour</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hermann%20Dornhackl">Hermann Dornhackl</a>, <a href="https://publications.waset.org/abstracts/search?q=Konstantin%20Kadletz"> Konstantin Kadletz</a>, <a href="https://publications.waset.org/abstracts/search?q=Robert%20Luh"> Robert Luh</a>, <a href="https://publications.waset.org/abstracts/search?q=Paul%20Tavolato"> Paul Tavolato</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we describe the use of formal methods to model malware behaviour. The modelling of harmful behaviour rests upon syntactic structures that represent malicious procedures inside malware. The malicious activities are modelled by a formal grammar, where the components of API calls are the terminals, and sets of API calls used in combination to achieve a goal are designated as non-terminals. The combination of different non-terminals in various ways and tiers makes up the attack vectors that are used by harmful software. Based on these syntactic structures, a parser can be generated that takes execution traces as input for pattern recognition.
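The grammar-driven trace matching described in this abstract can be sketched roughly as follows. This is a toy illustration, not the authors' actual rule set: each non-terminal names a malicious goal, its production is a sequence of API-call terminals, and a simple subsequence "parser" scans an execution trace for each pattern:

```python
# Toy grammar: each non-terminal (malicious goal) expands to a sequence
# of terminals (API call names) that must appear in order in the trace.
GRAMMAR = {
    "code_injection": ["OpenProcess", "VirtualAllocEx",
                       "WriteProcessMemory", "CreateRemoteThread"],
    "persistence": ["RegOpenKeyEx", "RegSetValueEx"],
}

def matches(trace, pattern):
    """True if pattern occurs as an in-order subsequence of the trace."""
    it = iter(trace)
    return all(call in it for call in pattern)

def analyse(trace):
    """Return the malicious goals (non-terminals) recognised in a trace."""
    return [goal for goal, pattern in GRAMMAR.items() if matches(trace, pattern)]

trace = ["LoadLibrary", "OpenProcess", "VirtualAllocEx",
         "WriteProcessMemory", "Sleep", "CreateRemoteThread"]
print(analyse(trace))  # ['code_injection']
```

A generated parser for the real grammar would, in the same spirit, consume execution traces and report which attack-vector non-terminals can be derived from them.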
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=malware%20behaviour" title="malware behaviour">malware behaviour</a>, <a href="https://publications.waset.org/abstracts/search?q=modelling" title=" modelling"> modelling</a>, <a href="https://publications.waset.org/abstracts/search?q=parsing" title=" parsing"> parsing</a>, <a href="https://publications.waset.org/abstracts/search?q=search" title=" search"> search</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20matching" title=" pattern matching"> pattern matching</a> </p> <a href="https://publications.waset.org/abstracts/3774/automatic-intelligent-analysis-of-malware-behaviour" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3774.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">332</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2000</span> Document-level Sentiment Analysis: An Exploratory Case Study of Low-resource Language Urdu</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ammarah%20Irum">Ammarah Irum</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Ali%20Tahir"> Muhammad Ali Tahir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Document-level sentiment analysis in Urdu is a challenging Natural Language Processing (NLP) task due to the difficulty of working with lengthy texts in a language with constrained resources. Deep learning models, which are complex neural network architectures, are well-suited to text-based applications in addition to data formats like audio, image, and video. 
To investigate the potential of deep learning for Urdu sentiment analysis, we implemented five different deep learning models, including Bidirectional Long Short Term Memory (BiLSTM), Convolutional Neural Network (CNN), Convolutional Neural Network with Bidirectional Long Short Term Memory (CNN-BiLSTM), and Bidirectional Encoder Representation from Transformer (BERT). In this study, we developed a hybrid deep learning model called BiLSTM-Single Layer Multi Filter Convolutional Neural Network (BiLSTM-SLMFCNN) by fusing the BiLSTM and CNN architectures. The proposed and baseline techniques are applied to the Urdu Customer Support and IMDB Urdu movie review data sets using pre-trained Urdu word embeddings that are suitable for document-level sentiment analysis. The results of these techniques are evaluated, and our proposed model outperforms all the other deep learning techniques for Urdu sentiment analysis. BiLSTM-SLMFCNN outperformed the baseline deep learning models, achieving 83%, 79%, and 83% accuracy on the small, medium, and large IMDB Urdu movie review data sets, respectively, and 94% accuracy on the Urdu Customer Support data set.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=urdu%20sentiment%20analysis" title="urdu sentiment analysis">urdu sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=opinion%20mining" title=" opinion mining"> opinion mining</a>, <a href="https://publications.waset.org/abstracts/search?q=low-resource%20language" title=" low-resource language"> low-resource language</a> </p> <a href="https://publications.waset.org/abstracts/172973/document-level-sentiment-analysis-an-exploratory-case-study-of-low-resource-language-urdu" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172973.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1999</span> Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arian%20Hosseini">Arian Hosseini</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmudul%20Hasan"> Mahmudul Hasan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. 
Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast contents and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy. 
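The "think small, think many" idea — a verification-based step ensemble of small models — can be sketched as follows. The threshold checks and feature names below are hypothetical stand-ins for the lightweight classifiers, not the authors' actual models:

```python
def make_threshold_verifier(feature_index, threshold):
    """Toy stand-in for a small model: thresholds one narrow feature."""
    return lambda features: features[feature_index] >= threshold

class VerificationCascade:
    """A sample is classified positive only if every verifier agrees."""
    def __init__(self, verifiers):
        self.verifiers = verifiers

    def predict(self, features):
        # Early exit: the first dissenting verifier rejects the sample,
        # so most negatives never reach the later stages.
        return all(v(features) for v in self.verifiers)

# Hypothetical narrowed-down colour features: [redness, brightness, flicker]
cascade = VerificationCascade([
    make_threshold_verifier(0, 0.6),  # strong red/orange tones
    make_threshold_verifier(1, 0.7),  # high brightness
    make_threshold_verifier(2, 0.5),  # rapid intensity change
])
print(cascade.predict([0.9, 0.8, 0.7]))  # True
print(cascade.predict([0.9, 0.2, 0.7]))  # False
```

The cheap early stages discard most negatives, which is where the inference-speed advantage over a single monolithic model comes from.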
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20classification" title="deep classification">deep classification</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20moderation" title=" content moderation"> content moderation</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title=" ensemble learning"> ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=explosion%20detection" title=" explosion detection"> explosion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20processing" title=" video processing"> video processing</a> </p> <a href="https://publications.waset.org/abstracts/183644/faster-lighter-more-accurate-a-deep-learning-ensemble-for-content-moderation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">54</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1998</span> Malaria Parasite Detection Using Deep Learning Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kaustubh%20Chakradeo">Kaustubh Chakradeo</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Delves"> Michael Delves</a>, <a href="https://publications.waset.org/abstracts/search?q=Sofya%20Titarenko"> Sofya Titarenko</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Malaria is a serious disease which affects hundreds of millions of people around the world, each year. If not treated in time, it can be fatal. 
Despite recent developments in malaria diagnostics, the microscopy method to detect malaria remains the most common. Unfortunately, the accuracy of microscopic diagnostics is dependent on the skill of the microscopist and limits the throughput of malaria diagnosis. With the development of Artificial Intelligence tools and Deep Learning techniques in particular, it is possible to lower the cost, while achieving an overall higher accuracy. In this paper, we present a VGG-based model and compare it with previously developed models for identifying infected cells. Our model surpasses most previously developed models in a range of the accuracy metrics. The model has an advantage of being constructed from a relatively small number of layers. This reduces the computer resources and computational time. Moreover, we test our model on two types of datasets and argue that the currently developed deep-learning-based methods cannot efficiently distinguish between infected and contaminated cells. A more precise study of suspicious regions is required. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title="convolution neural network">convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=malaria" title=" malaria"> malaria</a>, <a href="https://publications.waset.org/abstracts/search?q=thin%20blood%20smears" title=" thin blood smears"> thin blood smears</a> </p> <a href="https://publications.waset.org/abstracts/131600/malaria-parasite-detection-using-deep-learning-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/131600.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">130</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1997</span> Prediction on Housing Price Based on Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Li%20Yu">Li Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chenlu%20Jiao"> Chenlu Jiao</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongrun%20Xin"> Hongrun Xin</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20Wang"> Yan Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kaiyang%20Wang"> Kaiyang Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to study the impact of various factors on the housing price, we propose to build different prediction models based on deep learning to determine the existing data of the real estate in order to more accurately predict the housing price or its changing 
trend in the future. Considering that the factors which affect the housing price vary widely, the proposed prediction models fall into two categories. The first is based on multiple characteristic factors of the real estate: we built a Convolutional Neural Network (CNN) prediction model and a Long Short-Term Memory (LSTM) neural network prediction model based on deep learning, and a logistic regression model was implemented for comparison among the three models. The second category is the time series model: based on deep learning, we proposed an LSTM-1 model that regards the data purely as a time series, then implemented and compared the LSTM model and the Auto-Regressive Moving Average (ARMA) model. In this paper, a comprehensive study of second-hand housing prices in Beijing has been conducted from three aspects: data crawling and analysis, housing price prediction, and result comparison. Ultimately, the best-performing model was identified, which is of great significance to the evaluation and prediction of housing prices in the real estate industry.
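As a rough illustration of the time-series category of models, here is a least-squares AR(1) fit and forecast — far simpler than the LSTM and ARMA models the study compares, and the price series below is synthetic, not Beijing data:

```python
def fit_ar1(series):
    """Least-squares fit of an AR(1) model: x[t] ~ c + phi * x[t-1]."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    phi = cov / var
    c = my - phi * mx
    return c, phi

def forecast(series, c, phi, steps):
    """Iterate the fitted recurrence forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Hypothetical monthly price index, for illustration only
prices = [100, 102, 103, 105, 108, 110, 113, 115]
c, phi = fit_ar1(prices)
print(forecast(prices, c, phi, 3))
```

ARMA adds a moving-average term over past errors, and an LSTM replaces the linear recurrence with a learned nonlinear one; the input/output shape of the forecasting task stays the same.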
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=housing%20prediction" title=" housing prediction"> housing prediction</a> </p> <a href="https://publications.waset.org/abstracts/84747/prediction-on-housing-price-based-on-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84747.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1996</span> An Eco-Friendly Preparations of Izonicotinamide Quaternary Salts in Deep Eutectic Solvents </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dajana%20Ga%C5%A1o-Soka%C4%8D">Dajana Gašo-Sokač</a>, <a href="https://publications.waset.org/abstracts/search?q=Valentina%20Bu%C5%A1i%C4%87"> Valentina Bušić</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep eutectic solvents (DES) are liquids composed of two or three safe, inexpensive components, often interconnected by noncovalent hydrogen bonds which produce eutectic mixture whose melting point is lower than that of each component. No data in literature have been found on the quaternization reaction in DES. 
The use of DES has several advantages: they are environmentally benign and biodegradable, easy to purify, and simple to prepare. An environmentally sustainable method for preparing quaternary salts of izonicotinamide and substituted 2-bromoacetophenones was demonstrated here using choline chloride-based DES. The quaternization reaction was carried out by three synthetic approaches: a conventional method, microwave irradiation, and ultrasonic irradiation. We showed that the highest yields were obtained by the microwave method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20eutectic%20solvents" title="deep eutectic solvents">deep eutectic solvents</a>, <a href="https://publications.waset.org/abstracts/search?q=izonicotinamide%20salts" title=" izonicotinamide salts"> izonicotinamide salts</a>, <a href="https://publications.waset.org/abstracts/search?q=microwave%20synthesis" title=" microwave synthesis"> microwave synthesis</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasonic%20irradiation" title=" ultrasonic irradiation"> ultrasonic irradiation</a> </p> <a href="https://publications.waset.org/abstracts/118856/an-eco-friendly-preparations-of-izonicotinamide-quaternary-salts-in-deep-eutectic-solvents" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118856.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">130</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1995</span> Studies of Zooplankton in Gdańsk Basin (2010-2011)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lidia%20Dzierzbicka-Glowacka">Lidia Dzierzbicka-Glowacka</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Anna%20Lemieszek"> Anna Lemieszek</a>, <a href="https://publications.waset.org/abstracts/search?q=Mariusz%20Figiela"> Mariusz Figiela</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In 2010-2011, research on zooplankton was conducted in the southern part of the Baltic Sea to determine the seasonal variability of the zooplankton community in 2010 and 2011, both in the region of the Gdańsk Deep and in the western part of Gdańsk Bay. The research showed that the taxonomic composition of holoplankton in the southern part of the Baltic Sea was similar to that recorded in this region for many years. The maximum values of abundance and biomass of zooplankton, both in the Gdańsk Deep and in Gdańsk Bay, were observed in the summer season. Copepoda dominated the composition of zooplankton for almost the entire study period, while rotifers occurred in larger numbers only in the summer of 2010 in the Gdańsk Deep as well as in May and July 2010 in the western part of Gdańsk Bay, and meroplankton – in April 2011. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Baltic%20Sea" title="Baltic Sea">Baltic Sea</a>, <a href="https://publications.waset.org/abstracts/search?q=composition" title=" composition"> composition</a>, <a href="https://publications.waset.org/abstracts/search?q=Gda%C5%84sk%20Bay" title=" Gdańsk Bay"> Gdańsk Bay</a>, <a href="https://publications.waset.org/abstracts/search?q=zooplankton" title=" zooplankton"> zooplankton</a> </p> <a href="https://publications.waset.org/abstracts/25566/studies-of-zooplankton-in-gdansk-basin-2010-2011" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25566.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1994</span> Enhanced Image Representation for Deep Belief Network Classification of Hyperspectral Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khitem%20Amiri">Khitem Amiri</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Farah"> Mohamed Farah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image classification is a challenging task and is gaining lots of interest, since it helps us to understand the content of images. Recently, Deep Learning (DL) based methods have given very interesting results on several benchmarks. For Hyperspectral images (HSI), the application of DL techniques is still challenging due to the scarcity of labeled data and to the curse of dimensionality. Among other approaches, Deep Belief Network (DBN) based approaches have given fair classification accuracy. 
In this paper, we address the curse of dimensionality by reducing the number of bands and replacing the HSI channels with channels representing radiometric indices. Therefore, instead of using all the HSI bands, we compute radiometric indices such as the NDVI (Normalized Difference Vegetation Index), the NDWI (Normalized Difference Water Index), etc., and we use the combination of these indices as input for the Deep Belief Network (DBN) based classification model. Thus, we keep almost all the pertinent spectral information while considerably reducing the size of the image. In order to test our image representation, we applied our method on several HSI datasets, including the Indian Pines and Jasper Ridge datasets, and it gave results comparable to state-of-the-art methods while considerably reducing training and testing time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20images" title="hyperspectral images">hyperspectral images</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20belief%20network" title=" deep belief network"> deep belief network</a>, <a href="https://publications.waset.org/abstracts/search?q=radiometric%20indices" title=" radiometric indices"> radiometric indices</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a> </p> <a href="https://publications.waset.org/abstracts/93458/enhanced-image-representation-for-deep-belief-network-classification-of-hyperspectral-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93458.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">1993</span> Leveraging Automated and Connected Vehicles with Deep Learning for Smart Transportation Network Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Taha%20Benarbia">Taha Benarbia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The advent of automated and connected vehicles has revolutionized the transportation industry, presenting new opportunities for enhancing the efficiency, safety, and sustainability of our transportation networks. This paper explores the integration of automated and connected vehicles into a smart transportation framework, leveraging the power of deep learning techniques to optimize the overall network performance. The first aspect addressed in this paper is the deployment of automated vehicles (AVs) within the transportation system. AVs offer numerous advantages, such as reduced congestion, improved fuel efficiency, and increased safety through advanced sensing and decision-making capabilities. The paper delves into the technical aspects of AVs, including their perception, planning, and control systems, highlighting the role of deep learning algorithms in enabling intelligent and reliable AV operations. Furthermore, the paper investigates the potential of connected vehicles (CVs) in creating a seamless communication network between vehicles, infrastructure, and traffic management systems. By harnessing real-time data exchange, CVs enable proactive traffic management, adaptive signal control, and effective route planning. Deep learning techniques play a pivotal role in extracting meaningful insights from the vast amount of data generated by CVs, empowering transportation authorities to make informed decisions for optimizing network performance. 
The integration of deep learning with automated and connected vehicles paves the way for advanced transportation network optimization. Deep learning algorithms can analyze complex transportation data, including traffic patterns, demand forecasting, and dynamic congestion scenarios, to optimize routing, reduce travel times, and enhance overall system efficiency. The paper presents case studies and simulations demonstrating the effectiveness of deep learning-based approaches in achieving significant improvements in network performance metrics. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automated%20vehicles" title="automated vehicles">automated vehicles</a>, <a href="https://publications.waset.org/abstracts/search?q=connected%20vehicles" title=" connected vehicles"> connected vehicles</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=smart%20transportation%20network" title=" smart transportation network"> smart transportation network</a> </p> <a href="https://publications.waset.org/abstracts/168738/leveraging-automated-and-connected-vehicles-with-deep-learning-for-smart-transportation-network-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/168738.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">78</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1992</span> Optimizing Machine Learning Through Python Based Image Processing Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Srinidhi.%20A">Srinidhi. 
A</a>, <a href="https://publications.waset.org/abstracts/search?q=Naveed%20Ahmed"> Naveed Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Twinkle%20Hareendran"> Twinkle Hareendran</a>, <a href="https://publications.waset.org/abstracts/search?q=Vriksha%20Prakash"> Vriksha Prakash</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work reviews some of the advanced image processing techniques for deep learning applications. Object detection by template matching, image denoising, edge detection, and super-resolution modelling are but a few of the tasks. The paper examines them in great detail, given that such tasks are crucial preprocessing steps that increase the quality and usability of image datasets in subsequent deep learning tasks. We review some of the methods for the assessment of image quality, more specifically sharpness, which is crucial to ensure robust model performance. Further, we discuss the development of deep learning models specific to facial emotion detection, age classification, and gender classification, which essentially rely on the preprocessing techniques interrelated with model performance. Conclusions from this study pinpoint the best practices in the preparation of image datasets, targeting the best trade-off between computational efficiency and retaining important image features critical for effective training of deep learning models. 
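One common heuristic for the sharpness assessment mentioned above is the variance of a Laplacian response: flat images produce no response, while edges and fine detail do. A generic sketch on a nested-list grayscale image, not code from the study:

```python
def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian response over the image
    interior; higher values indicate a sharper image (more
    high-frequency content)."""
    h, w = len(img), len(img[0])
    resp = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            resp.append(lap)
    mean = sum(resp) / len(resp)
    return sum((r - mean) ** 2 for r in resp) / len(resp)

flat = [[10] * 5 for _ in range(5)]                          # uniform: no edges
edged = [[0] * 5 if r < 2 else [255] * 5 for r in range(5)]  # hard edge
# laplacian_variance(flat) == 0.0; the edged image scores far higher
```

A dataset-preparation pipeline could use such a score to filter out blurry samples before training.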
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning%20applications" title=" machine learning applications"> machine learning applications</a>, <a href="https://publications.waset.org/abstracts/search?q=template%20matching" title=" template matching"> template matching</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20detection" title=" emotion detection"> emotion detection</a> </p> <a href="https://publications.waset.org/abstracts/193107/optimizing-machine-learning-through-python-based-image-processing-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">13</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1991</span> Vehicle Detection and Tracking Using Deep Learning Techniques in Surveillance Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abe%20D.%20Desta">Abe D. Desta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study suggests a deep learning-based method for identifying and following moving objects in surveillance video. The proposed method uses a fast regional convolution neural network (F-RCNN) trained on a substantial dataset of vehicle images to first detect vehicles. A Kalman filter and a data association technique based on a Hungarian algorithm are then used to monitor the observed vehicles throughout time. 
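As a rough illustration of the tracking stage just described, the predict/update cycle of a Kalman filter can be sketched in scalar form (the actual system tracks multi-dimensional vehicle states and pairs the filter with Hungarian data association; the noise parameters and measurements below are invented):

```python
def kalman_step(x, p, z, q=1e-2, r=1.0):
    """One predict/update cycle of a scalar Kalman filter.
    x: state estimate, p: estimate variance,
    z: new measurement, q: process noise, r: measurement noise."""
    p = p + q                # predict: uncertainty grows between frames
    k = p / (p + r)          # Kalman gain: trust in the new measurement
    x = x + k * (z - x)      # update: blend prediction and measurement
    p = (1 - k) * p          # uncertainty shrinks after the update
    return x, p

x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:   # noisy per-frame position measurements
    x, p = kalman_step(x, p, z)
# the estimate moves toward ~1.0 while the variance shrinks
```

In a full tracker, one such filter runs per vehicle, and detections are assigned to filters frame by frame.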
The F-RCNN algorithm proved effective in this study, achieving high detection accuracy and robustness: for vehicle detection and tracking, the system was able to achieve an accuracy of 97.4%. The F-RCNN algorithm was also compared to other popular object detection algorithms and was found to outperform them in terms of both detection accuracy and speed. The presented system, which has application potential in actual surveillance systems, shows the usefulness of deep learning approaches in vehicle detection and tracking. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-regional%20convolutional%20neural%20networks" title=" fast-regional convolutional neural networks"> fast-regional convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20tracking" title=" vehicle tracking"> vehicle tracking</a> </p> <a href="https://publications.waset.org/abstracts/164803/vehicle-detection-and-tracking-using-deep-learning-techniques-in-surveillance-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164803.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">126</span> </span> </div> </div> <div 
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1990</span> Correlation between Speech Emotion Recognition Deep Learning Models and Noises</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Leah%20Lee">Leah Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper examines the correlation between deep learning models and emotions under noise, to see whether or not noise masks emotions. The deep learning models used are plain convolutional neural networks (CNN), auto-encoder, long short-term memory (LSTM), and Visual Geometry Group-16 (VGG-16). Emotion datasets used are the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D), Toronto Emotional Speech Set (TESS), and Surrey Audio-Visual Expressed Emotion (SAVEE). To make the dataset four times bigger, stretch and pitch augmentations are applied to the audio files. From the augmented datasets, five different features are extracted as inputs for the models. There are eight different emotions to be classified. Noise variations are white noise, dog barking, and cough sounds. The variation in the signal-to-noise ratio (SNR) is 0, 20, and 40. In sum, per deep learning model, nine different sets with noise and SNR variations, plus the augmented audio files without any noise, are used in the experiment. To compare the results of the deep learning models, the accuracy and receiver operating characteristic (ROC) are checked. 
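A standard way to realize the SNR variations described above is to scale the noise so that the mixture hits a target SNR in decibels. A self-contained sketch under that assumption; `mix_at_snr` and the toy sample lists are illustrative, not code from the paper:

```python
import math

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so the mixture has the requested signal-to-noise
    ratio (in dB), then add it to `signal` sample by sample."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    scale = math.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return [s + scale * n for s, n in zip(signal, noise)]

clean = [1.0, -1.0] * 4    # stand-in for a speech frame
noise = [0.5, -0.5] * 4    # stand-in for white noise / barking / cough
mixed = mix_at_snr(clean, noise, snr_db=20)
```

Running the same mixer at 0, 20, and 40 dB produces the three noisy variants of each augmented file.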
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=auto-encoder" title="auto-encoder">auto-encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory" title=" long short-term memory"> long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20emotion%20recognition" title=" speech emotion recognition"> speech emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20geometry%20group-16" title=" visual geometry group-16"> visual geometry group-16</a> </p> <a href="https://publications.waset.org/abstracts/170547/correlation-between-speech-emotion-recognition-deep-learning-models-and-noises" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170547.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">75</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1989</span> Deep Reinforcement Learning Model for Autonomous Driving</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Boumaraf%20Malak">Boumaraf Malak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The development of intelligent transportation systems (ITS) and artificial intelligence (AI) is spurring us to pave the way for the widespread adoption of autonomous vehicles (AVs). This opens up new opportunities for smart roads, smart traffic safety, and mobility comfort. 
A highly intelligent decision-making system is essential for autonomous driving around dense, dynamic objects. It must be able to handle complex road geometry and topology, as well as complex multiagent interactions, and closely follow higher-level commands such as routing information. Autonomous vehicles have become a very hot research topic in recent years due to their significant potential to reduce traffic accidents and personal injuries. New artificial intelligence-based technologies handle important functions in scene understanding, motion planning, decision making, vehicle control, social behavior, and communication for AVs. This paper focuses only on deep reinforcement learning-based methods; it does not include the traditional planning techniques that were the subject of extensive research in the past, because reinforcement learning (RL) has become a powerful learning framework, now capable of learning complex policies in high-dimensional environments. The DRL algorithms used so far have found solutions to the four main problems of autonomous driving; in this paper, we highlight the challenges and point to possible future research directions. 
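As a toy illustration of the RL framework the abstract refers to, the tabular Q-learning update (which the deep Q-learning keyword below approximates with a neural network) can be sketched as follows; the states, actions, and reward values are invented for illustration:

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values())
    td_target = reward + gamma * best_next
    q[state][action] += alpha * (td_target - q[state][action])
    return q

# two hypothetical driving states, two actions; values start at zero
q = {s: {"keep_lane": 0.0, "change_lane": 0.0} for s in ("s0", "s1")}
q = q_update(q, "s0", "keep_lane", reward=1.0, next_state="s1")
# q["s0"]["keep_lane"] == 0.5
```

Deep RL replaces the table with a network so the same update scales to the high-dimensional observations a vehicle actually receives.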
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20reinforcement%20learning" title="deep reinforcement learning">deep reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=autonomous%20driving" title=" autonomous driving"> autonomous driving</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20deterministic%20policy%20gradient" title=" deep deterministic policy gradient"> deep deterministic policy gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20Q-learning" title=" deep Q-learning"> deep Q-learning</a> </p> <a href="https://publications.waset.org/abstracts/166548/deep-reinforcement-learning-model-for-autonomous-driving" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166548.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">85</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1988</span> Evaluation of Formability of AZ61 Magnesium Alloy at Elevated Temperatures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ramezani%20M.">Ramezani M.</a>, <a href="https://publications.waset.org/abstracts/search?q=Neitzert%20T."> Neitzert T.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates mechanical properties and formability of the AZ61 magnesium alloy at high temperatures. Tensile tests were performed at elevated temperatures of up to 400ºC. The results showed that as temperature increases, yield strength and ultimate tensile strength decrease significantly, while the material experiences an increase in ductility (maximum elongation before break). 
A finite element model has been developed to further investigate the formability of the AZ61 alloy by deep drawing a square cup. Effects of different process parameters, such as punch and die geometry, forming speed and temperature, as well as blank-holder force, on the deep drawability of the AZ61 alloy were studied, and optimum values for these parameters were obtained, which can be used as a design guide for the deep drawing of this alloy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AZ61" title="AZ61">AZ61</a>, <a href="https://publications.waset.org/abstracts/search?q=formability" title=" formability"> formability</a>, <a href="https://publications.waset.org/abstracts/search?q=magnesium" title=" magnesium"> magnesium</a>, <a href="https://publications.waset.org/abstracts/search?q=mechanical%20properties" title=" mechanical properties"> mechanical properties</a> </p> <a href="https://publications.waset.org/abstracts/23114/evaluation-of-formability-of-az61-magnesium-alloy-at-elevated-temperatures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23114.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">579</span> </span> </div> </div> <ul class="pagination"> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=3" rel="prev">&lsaquo;</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=1">1</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=3">3</a></li> <li class="page-item active"><span 
class="page-link">4</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=70">70</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=71">71</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20parsing&amp;page=5" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a 
href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr 
style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
