<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: lane detection</title> <meta name="description" content="Search results for: lane detection"> <meta name="keywords" content="lane detection"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open 
Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="lane detection" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div 
class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="lane detection"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3560</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: lane detection</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3470</span> Active Islanding Detection Method Using Intelligent Controller</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kuang-Hsiung%20Tan">Kuang-Hsiung Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=Chih-Chan%20Hu"> Chih-Chan Hu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chien-Wu%20Lan"> Chien-Wu Lan</a>, <a href="https://publications.waset.org/abstracts/search?q=Shih-Sung%20Lin"> Shih-Sung Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Te-Jen%20Chang"> Te-Jen Chang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An active islanding detection method using disturbance signal injection with intelligent controller is proposed in this study. 
First, a DC/AC power inverter is emulated in the distributed generator (DG) system to implement the tracking control of the active and reactive power outputs and the islanding detection. The proposed active islanding detection method is based on injecting a disturbance signal into the power inverter system through the <em>d</em>-axis current, which leads to a frequency deviation at the terminal of the <em>RLC</em> load when the utility power is disconnected. Moreover, in order to improve the transient and steady-state responses of the active power and reactive power outputs of the power inverter, and to further improve the performance of the islanding detection method, two probabilistic fuzzy neural networks (PFNN) are adopted to replace the traditional proportional-integral (PI) controllers for the tracking control and the islanding detection. Furthermore, the network structure and the online learning algorithm of the PFNN are introduced in detail. Finally, the feasibility and effectiveness of the tracking control and the proposed active islanding detection method are verified with experimental results. 
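A minimal sketch of the frequency-deviation decision logic this abstract describes: once the disturbance is injected through the d-axis current, islanding shows up as a sustained drift of the terminal frequency away from nominal. The threshold and persistence values below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def islanding_detected(freq_samples, f_nominal=60.0, threshold_hz=0.5, consecutive=5):
    """Flag islanding when the terminal frequency deviates from nominal by
    more than threshold_hz for `consecutive` successive samples.
    This is only the decision logic; the disturbance injection itself
    happens in the inverter's d-axis current control loop."""
    run = 0
    for f in freq_samples:
        run = run + 1 if abs(f - f_nominal) > threshold_hz else 0
        if run >= consecutive:
            return True
    return False

# Grid-connected: the utility holds the terminal frequency near nominal.
grid_connected = 60.0 + 0.05 * np.sin(np.linspace(0.0, 10.0, 100))
# Islanded: the injected disturbance drifts the RLC terminal frequency.
islanded = np.concatenate([np.full(50, 60.0), np.linspace(60.0, 61.5, 50)])
```

With these synthetic traces, the grid-connected case never trips the detector while the islanded case does.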
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=distributed%20generators" title="distributed generators">distributed generators</a>, <a href="https://publications.waset.org/abstracts/search?q=probabilistic%20fuzzy%20neural%20network" title=" probabilistic fuzzy neural network"> probabilistic fuzzy neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=islanding%20detection" title=" islanding detection"> islanding detection</a>, <a href="https://publications.waset.org/abstracts/search?q=non-detection%20zone" title=" non-detection zone"> non-detection zone</a> </p> <a href="https://publications.waset.org/abstracts/39253/active-islanding-detection-method-using-intelligent-controller" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39253.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">389</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3469</span> Structural Damage Detection Using Sensors Optimally Located</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Alberto%20Riveros">Carlos Alberto Riveros</a>, <a href="https://publications.waset.org/abstracts/search?q=Edwin%20Fabi%C3%A1n%20Garc%C3%ADa"> Edwin Fabián García</a>, <a href="https://publications.waset.org/abstracts/search?q=Javier%20Enrique%20Rivero"> Javier Enrique Rivero</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The measured data obtained from sensors in continuous monitoring of civil structures are mainly used for modal identification and damage detection. 
Therefore, when modal identification analysis is carried out, the quality of the identified modes will strongly influence the damage detection results. It is also widely recognized that the usefulness of the measured data used for modal identification and damage detection is significantly influenced by the number and locations of sensors. The objective of this study is the numerical implementation of two widely known optimum sensor placement methods in beam-like structures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optimum%20sensor%20placement" title="optimum sensor placement">optimum sensor placement</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20damage%20detection" title=" structural damage detection"> structural damage detection</a>, <a href="https://publications.waset.org/abstracts/search?q=modal%20identification" title=" modal identification"> modal identification</a>, <a href="https://publications.waset.org/abstracts/search?q=beam-like%20structures." title=" beam-like structures. "> beam-like structures. 
</a> </p> <a href="https://publications.waset.org/abstracts/15240/structural-damage-detection-using-sensors-optimally-located" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15240.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">431</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3468</span> GPU Based Real-Time Floating Object Detection System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jie%20Yang">Jie Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian-Min%20Meng"> Jian-Min Meng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A GPU-based floating object detection scheme designed for floating mine detection tasks is presented in this paper. The system uses contrast and motion information to eliminate as many false positives as possible while avoiding false negatives. The GPU computation platform is deployed to allow objects to be detected in real time. The experimental results show that, with a certain configuration, the GPU-based scheme can speed up the computation by up to one thousand times compared to the CPU-based scheme. 
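The abstract above combines contrast and motion cues to suppress false positives. A CPU-side NumPy toy of that idea (the paper's actual GPU pipeline is not described in detail, so the thresholds and the per-frame median "background" are assumptions for illustration):

```python
import numpy as np

def detect_moving_pixels(prev_frame, frame, motion_thresh=25, contrast_thresh=30):
    """Toy combination of the two cues: a pixel is a candidate only if it
    both changed between frames (motion) and differs from the global
    background level, here taken as the frame median (contrast)."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > motion_thresh
    contrast = np.abs(frame.astype(int) - np.median(frame)) > contrast_thresh
    return motion & contrast

prev = np.full((8, 8), 100, dtype=np.uint8)   # calm water background
cur = prev.copy()
cur[3:5, 3:5] = 200                           # bright floating object appears
mask = detect_moving_pixels(prev, cur)
```

Only the four pixels of the newly appeared bright patch satisfy both cues; a GPU version would evaluate the same elementwise tests in parallel.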
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title="object detection">object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=GPU" title=" GPU"> GPU</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20estimation" title=" motion estimation"> motion estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=parallel%20processing" title=" parallel processing"> parallel processing</a> </p> <a href="https://publications.waset.org/abstracts/54425/gpu-based-real-time-floating-object-detection-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54425.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">474</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3467</span> Mixed Traffic Speed–Flow Behavior under Influence of Road Side Friction and Non-Motorized Vehicles: A Comparative Study of Arterial Roads in India</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chetan%20R.%20Patel">Chetan R. Patel</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20J.%20Joshi"> G. J. Joshi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study is carried out on six lane divided urban arterial road in Patna and Pune city of India. Both the road having distinct differences in terms of the vehicle composition and the road side parking. Arterial road in Patan city has 33% of non-motorized mode, whereas Pune arterial road dominated by 65% of Two wheeler. Also road side parking is observed in Patna city. 
Field studies using videographic techniques were carried out for traffic data collection. Data were extracted at one-minute intervals for vehicle composition, speed variation, and flow rate on the selected arterial roads of the two cities. Speed-flow relationships are developed and capacity is determined. An equivalency factor in terms of dynamic car units is determined to represent each vehicle type as a single unit. The variation in capacity due to side friction, the presence of non-motorized traffic, and the effective utilization of lane width is compared in the concluding remarks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=arterial%20road" title="arterial road">arterial road</a>, <a href="https://publications.waset.org/abstracts/search?q=capacity" title=" capacity"> capacity</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20equivalency%20factor" title=" dynamic equivalency factor"> dynamic equivalency factor</a>, <a href="https://publications.waset.org/abstracts/search?q=effect%20of%20non%20motorized%20mode" title=" effect of non motorized mode"> effect of non motorized mode</a>, <a href="https://publications.waset.org/abstracts/search?q=side%20friction" title=" side friction"> side friction</a> </p> <a href="https://publications.waset.org/abstracts/16039/mixed-traffic-speed-flow-behavior-under-influence-of-road-side-friction-and-non-motorized-vehicles-a-comparative-study-of-arterial-roads-in-india" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16039.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">348</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3466</span> Thermal Neutron Detection Efficiency as a Function of Film Thickness for Front 
and Back Irradiation Detector Devices Coated with ¹⁰B, ⁶LiF, and Pure Li Thin Films</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vedant%20Subhash">Vedant Subhash</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper discusses the physics of the detection of thermal neutrons using thin-film-coated semiconductor detectors. The thermal neutron detection efficiency as a function of film thickness is calculated for front and back irradiation detector devices coated with ¹⁰B, ⁶LiF, and pure Li thin films. For ¹⁰B films of thickness 2.4 μm, the detection efficiency for back irradiation devices is 4.15%, slightly higher than the 4.0% of front irradiation detectors. The theoretically calculated thermal neutron detection efficiency using a ¹⁰B film thickness of 1.1 μm for the back irradiation device is 3.0367%, an offset of 0.0367% from the experimental value of 3.0%. The detection efficiency values are compared and shown to be consistent with the calculations. 
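The film-thickness dependence can be put in context with a simplified Beer-Lambert absorption model. The constants below are standard reference values (2200 m/s capture cross-section of ¹⁰B ≈ 3840 b, boron density ≈ 2.34 g/cm³), not figures from the paper, and the model deliberately ignores reaction-product escape into the semiconductor, so it only upper-bounds the quoted detection efficiencies:

```python
import math

# Assumed standard constants (not taken from the paper):
SIGMA_B10_BARN = 3840.0   # thermal-neutron capture cross-section of 10B
RHO_B10 = 2.34            # g/cm^3
N_A = 6.022e23            # Avogadro's number
A_B10 = 10.0              # g/mol

def absorption_fraction(thickness_um):
    """Fraction of normally incident thermal neutrons absorbed in a 10B
    film of the given thickness. Simplified Beer-Lambert model: no
    charged-particle escape, so it upper-bounds detection efficiency."""
    n = RHO_B10 * N_A / A_B10                 # atoms per cm^3
    sigma_macro = n * SIGMA_B10_BARN * 1e-24  # macroscopic cross-section, 1/cm
    t_cm = thickness_um * 1e-4
    return 1.0 - math.exp(-sigma_macro * t_cm)
```

A 2.4 μm film absorbs roughly 12% of incident thermal neutrons under this model; the measured ~4% efficiency is lower because only reaction products that escape into the semiconductor are counted.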
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=detection%20efficiency" title="detection efficiency">detection efficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=neutron%20detection" title=" neutron detection"> neutron detection</a>, <a href="https://publications.waset.org/abstracts/search?q=semiconductor%20detectors" title=" semiconductor detectors"> semiconductor detectors</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20neutrons" title=" thermal neutrons"> thermal neutrons</a> </p> <a href="https://publications.waset.org/abstracts/133906/thermal-neutron-detection-efficiency-as-a-function-of-film-thickness-for-front-and-back-irradiation-detector-devices-coated-with-1b-6lif-and-pure-li-thin-films" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133906.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3465</span> Incorporating Anomaly Detection in a Digital Twin Scenario Using Symbolic Regression</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manuel%20Alves">Manuel Alves</a>, <a href="https://publications.waset.org/abstracts/search?q=Angelica%20Reis"> Angelica Reis</a>, <a href="https://publications.waset.org/abstracts/search?q=Armindo%20Lobo"> Armindo Lobo</a>, <a href="https://publications.waset.org/abstracts/search?q=Valdemar%20Leiras"> Valdemar Leiras</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In industry 4.0, it is common to have a lot of sensor data. In this deluge of data, hints of possible problems are difficult to spot. 
The digital twin concept aims to help address this problem, but it is mainly used as a monitoring tool to handle the visualisation of data. Failure detection is of paramount importance in any industry, and it consumes a lot of resources. Any improvement in this regard is of tangible value to the organisation. The aim of this paper is to add the ability to forecast test failures, curtailing detection times. To achieve this, several anomaly detection algorithms were compared with a symbolic regression approach. To this end, Isolation Forest, One-Class SVM and an auto-encoder have been explored. For symbolic regression, the PySR library was used. The first results show that this approach is valid and can be added to the tools available in this context as a low-resource anomaly detection method since, after training, the only requirement is the calculation of a polynomial, a useful feature in the digital twin context. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title="anomaly detection">anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20twin" title=" digital twin"> digital twin</a>, <a href="https://publications.waset.org/abstracts/search?q=industry%204.0" title=" industry 4.0"> industry 4.0</a>, <a href="https://publications.waset.org/abstracts/search?q=symbolic%20regression" title=" symbolic regression"> symbolic regression</a> </p> <a href="https://publications.waset.org/abstracts/151469/incorporating-anomaly-detection-in-a-digital-twin-scenario-using-symbolic-regression" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151469.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">120</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">3464</span> Fault Detection and Isolation in Attitude Control Subsystem of Spacecraft Formation Flying Using Extended Kalman Filters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Ghasemi">S. Ghasemi</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Khorasani"> K. Khorasani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the problem of fault detection and isolation in the attitude control subsystem of spacecraft formation flying is considered. In order to design the fault detection method, an extended Kalman filter is utilized which is a nonlinear stochastic state estimation method. Three fault detection architectures, namely, centralized, decentralized, and semi-decentralized are designed based on the extended Kalman filters. Moreover, the residual generation and threshold selection techniques are proposed for these architectures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=component" title="component">component</a>, <a href="https://publications.waset.org/abstracts/search?q=formation%20flight%20of%20satellites" title=" formation flight of satellites"> formation flight of satellites</a>, <a href="https://publications.waset.org/abstracts/search?q=extended%20Kalman%20filter" title=" extended Kalman filter"> extended Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=fault%20detection%20and%20isolation" title=" fault detection and isolation"> fault detection and isolation</a>, <a href="https://publications.waset.org/abstracts/search?q=actuator%20fault" title=" actuator fault"> actuator fault</a> </p> <a href="https://publications.waset.org/abstracts/26418/fault-detection-and-isolation-in-attitude-control-subsystem-of-spacecraft-formation-flying-using-extended-kalman-filters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26418.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">435</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3463</span> Functional Variants Detection by RNAseq</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Raffaele%20A.%20Calogero">Raffaele A. Calogero</a> </p> <p class="card-text"><strong>Abstract:</strong></p> RNAseq represents an attractive methodology for the detection of functional genomic variants. RNAseq results obtained from polyA+ RNA selection protocol (POLYA) and from exonic regions capturing protocol (ACCESS) indicate that ACCESS detects 10% more coding SNV/INDELs with respect to POLYA. 
ACCESS requires fewer reads for coding SNV detection with respect to POLYA. However, if the analysis also aims at identifying SNV/INDELs in the 5’ and 3’ UTRs, POLYA is definitely the preferred method. No particular advantage comes from ACCESS or POLYA in the detection of fusion transcripts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fusion%20transcripts" title="fusion transcripts">fusion transcripts</a>, <a href="https://publications.waset.org/abstracts/search?q=INDEL" title=" INDEL"> INDEL</a>, <a href="https://publications.waset.org/abstracts/search?q=RNA-seq" title=" RNA-seq"> RNA-seq</a>, <a href="https://publications.waset.org/abstracts/search?q=WES" title=" WES"> WES</a>, <a href="https://publications.waset.org/abstracts/search?q=SNV" title=" SNV"> SNV</a> </p> <a href="https://publications.waset.org/abstracts/57993/functional-variants-detection-by-rnaseq" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57993.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">288</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3462</span> Calculation of Detection Efficiency of Horizontal Large Volume Source Using Exvol Code</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Y.%20Kang">M. Y. Kang</a>, <a href="https://publications.waset.org/abstracts/search?q=Euntaek%20Yoon"> Euntaek Yoon</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20D.%20Choi"> H. D. 
Choi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To calculate the full energy (FE) absorption peak efficiency for an arbitrary volume sample, we developed and verified the EXVol (Efficiency calculator for EXtended Voluminous source) code, which is based on the effective solid angle method. EXVol can describe the source area as a non-uniform three-dimensional (x, y, z) source and decompose it into several sets of volume units. Users can equally divide the (x, y, z) coordinate system to calculate the detection efficiency at a specific position of a cylindrical volume source. By determining the detection efficiency for differential volume units, the total radiative absolute distribution and the correction factor of the detection efficiency can be obtained from the nondestructive measurement of the source. To check the performance of the EXVol code, a Si ingot 20 cm in diameter and 50 cm in height was used as a source. The detector was moved in the collimation geometry to calculate the detection efficiency at a specific position, and the results were compared with the experimental values. In this study, the performance of the EXVol code was extended to obtain the detection efficiency distribution at a specific position in a large volume source. 
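The voxel decomposition described above can be sketched as follows. This is not the EXVol effective solid angle computation itself: the per-voxel efficiency here is a toy inverse-square term with no attenuation, and the grid resolution is an arbitrary choice, so the numbers are purely illustrative of the discretize-and-sum structure.

```python
import numpy as np

def cylinder_voxels(radius, height, n=20):
    """Equally divide the (x, y, z) coordinate system and keep the grid
    points inside the cylinder: EXVol-style decomposition of a
    cylindrical volume source into volume units."""
    xs = np.linspace(-radius, radius, n)
    zs = np.linspace(0.0, height, n)
    X, Y, Z = np.meshgrid(xs, xs, zs, indexing="ij")
    inside = X**2 + Y**2 <= radius**2
    return np.stack([X[inside], Y[inside], Z[inside]], axis=1)

def total_efficiency(voxels, detector_pos, eps0=1.0):
    """Toy per-voxel point-source efficiency ~ 1/r^2 (no attenuation),
    averaged over voxels; stands in for the effective solid angle term."""
    r2 = np.sum((voxels - np.asarray(detector_pos)) ** 2, axis=1)
    return float(np.mean(eps0 / r2))

vox = cylinder_voxels(radius=10.0, height=50.0)   # the 20 cm x 50 cm Si ingot
eff_near = total_efficiency(vox, (0.0, 0.0, -5.0))
eff_far = total_efficiency(vox, (0.0, 0.0, -20.0))
```

Moving the (collimated) detector position simply re-weights the same voxel set, which is how an efficiency distribution over positions is built up.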
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attenuation" title="attenuation">attenuation</a>, <a href="https://publications.waset.org/abstracts/search?q=EXVol" title=" EXVol"> EXVol</a>, <a href="https://publications.waset.org/abstracts/search?q=detection%20efficiency" title=" detection efficiency"> detection efficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=volume%20source" title=" volume source"> volume source</a> </p> <a href="https://publications.waset.org/abstracts/97158/calculation-of-detection-efficiency-of-horizontal-large-volume-source-using-exvol-code" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/97158.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">185</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3461</span> Towards Integrating Statistical Color Features for Human Skin Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Zamri%20Osman">Mohd Zamri Osman</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Aizaini%20Maarof"> Mohd Aizaini Maarof</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Foad%20Rohani"> Mohd Foad Rohani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection recognized as the primary step in most of the applications such as face detection, illicit image filtering, hand recognition and video surveillance. The performance of any skin detection applications greatly relies on the two components: feature extraction and classification method. Skin color is the most vital information used for skin detection purpose. 
However, color features alone sometimes cannot handle images whose color distribution is the same as that of skin. Pixel-based color features do not eliminate skin-like colors, because the intensities of skin and skin-like colors fall under the same distribution. Hence, statistical color measures such as the mean and standard deviation are exploited as additional features to increase the reliability of the skin detector. In this paper, we studied the effectiveness of statistical color features for human skin detection. Furthermore, the paper analyzed the integrated color and texture features using eight classifiers with three color spaces: RGB, YCbCr, and HSV. The experimental results show that integrating statistical features using a Random Forest classifier achieved a significant performance, with an F1-score of 0.969. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title=" skin detection"> skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20feature" title=" statistical feature"> statistical feature</a> </p> <a href="https://publications.waset.org/abstracts/43485/towards-integrating-statistical-color-features-for-human-skin-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">3460</span> An Earth Mover’s Distance Algorithm Based DDoS Detection Mechanism in SDN</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Zhou">Yang Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Kangfeng%20Zheng"> Kangfeng Zheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei%20Ni"> Wei Ni</a>, <a href="https://publications.waset.org/abstracts/search?q=Ren%20Ping%20Liu"> Ren Ping Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Software-defined networking (SDN) provides a solution for scalable network framework with decoupled control and data plane. However, this architecture also induces a particular distributed denial-of-service (DDoS) attack that can affect or even overwhelm the SDN network. DDoS attack detection problem has to date been mostly researched as entropy comparison problem. However, this problem lacks the utilization of SDN, and the results are not accurate. In this paper, we propose a DDoS attack detection method, which interprets DDoS detection as a signature matching problem and is formulated as Earth Mover’s Distance (EMD) model. Considering the feasibility and accuracy, we further propose to define the cost function of EMD to be a generalized Kullback-Leibler divergence. Simulation results show that our proposed method can detect DDoS attacks by comparing EMD values with the ones computed in the case without attacks. Moreover, our method can significantly increase the true positive rate of detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DDoS%20detection" title="DDoS detection">DDoS detection</a>, <a href="https://publications.waset.org/abstracts/search?q=EMD" title=" EMD"> EMD</a>, <a href="https://publications.waset.org/abstracts/search?q=relative%20entropy" title=" relative entropy"> relative entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=SDN" title=" SDN"> SDN</a> </p> <a href="https://publications.waset.org/abstracts/90528/an-earth-movers-distance-algorithm-based-ddos-detection-mechanism-in-sdn" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/90528.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">338</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3459</span> Subjective Evaluation of Mathematical Morphology Edge Detection on Computed Tomography (CT) Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emhimed%20Saffor">Emhimed Saffor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the problem of edge detection in digital images is considered. Three edge detection methods based on mathematical morphology were applied to two sets (brain and chest) of CT images: a 3x3 filter for the first method, a 5x5 filter for the second, and a 7x7 filter for the third, under the MATLAB programming environment. The results of the above-mentioned methods were subjectively evaluated. The results show that these methods are efficient and suitable for medical images and can be used in various other applications.
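The three filter sizes mentioned in this abstract correspond to the structuring-element size of a morphological gradient (dilation minus erosion), which is the standard mathematical-morphology edge detector. A minimal sketch in Python/SciPy rather than MATLAB, with the filter size as a parameter:

```python
import numpy as np
from scipy import ndimage

def morph_edges(img, size=3):
    """Morphological gradient edge detector: grayscale dilation minus
    erosion with a size x size structuring element (3, 5, or 7,
    matching the three methods described above)."""
    selem = np.ones((size, size))
    return (ndimage.grey_dilation(img, footprint=selem)
            - ndimage.grey_erosion(img, footprint=selem))
```

Larger structuring elements produce thicker edge responses, which is the visual difference a subjective evaluation of the three methods would compare.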
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CT%20images" title="CT images">CT images</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection "> edge detection </a> </p> <a href="https://publications.waset.org/abstracts/44926/subjective-evaluation-of-mathematical-morphology-edge-detection-on-computed-tomography-ct-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">338</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3458</span> Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Victoria%20Siriaki%20Jorry">Victoria Siriaki Jorry</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20S.%20Mbalawata"> I. S. Mbalawata</a>, <a href="https://publications.waset.org/abstracts/search?q=Hayong%20Shin"> Hayong Shin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main objective in a change detection problem is to develop algorithms for efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. 
In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean value of time series data, based on a likelihood ratio test procedure. The design, implementation and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. Performance of the MCUSUM for gradual change-point detection is compared to that of a standard cumulative sum (CUSUM) control chart designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart in detecting a gradual change in mean. The algorithm is then applied to randomly generated time series data with a gradual linear trend in mean to demonstrate its usefulness.
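For reference, the abrupt-shift baseline that this abstract compares against, a standard one-sided CUSUM chart, can be sketched as follows; the reference value k and decision threshold h are illustrative defaults, not values from the paper, and the authors' MCUSUM modification is not reproduced here.

```python
import numpy as np

def cusum_detect(x, target_mean, k=0.5, h=5.0):
    """One-sided standard CUSUM chart: accumulate deviations above
    target_mean + k and signal when the statistic exceeds h.
    Returns the index of the first alarm, or None if no alarm fires."""
    s = 0.0
    for i, xi in enumerate(x):
        # Reset the statistic at zero; only upward deviations accumulate.
        s = max(0.0, s + (xi - target_mean - k))
        if s > h:
            return i
    return None
```

Against a gradual linear drift the statistic grows slowly at first, which is why a chart tuned for abrupt shifts has a longer expected detection time than the modified algorithm proposed above.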
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=average%20run%20length" title="average run length">average run length</a>, <a href="https://publications.waset.org/abstracts/search?q=CUSUM%20control%20chart" title=" CUSUM control chart"> CUSUM control chart</a>, <a href="https://publications.waset.org/abstracts/search?q=gradual%20change%20detection" title=" gradual change detection"> gradual change detection</a>, <a href="https://publications.waset.org/abstracts/search?q=likelihood%20ratio%20test" title=" likelihood ratio test"> likelihood ratio test</a> </p> <a href="https://publications.waset.org/abstracts/70339/modified-cusum-algorithm-for-gradual-change-detection-in-a-time-series-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70339.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">299</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3457</span> A Novel Spectral Index for Automatic Shadow Detection in Urban Mapping Based on WorldView-2 Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kaveh%20Shahi">Kaveh Shahi</a>, <a href="https://publications.waset.org/abstracts/search?q=Helmi%20Z.%20M.%20Shafri"> Helmi Z. M. Shafri</a>, <a href="https://publications.waset.org/abstracts/search?q=Ebrahim%20Taherzadeh"> Ebrahim Taherzadeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In remote sensing, shadows cause problems in many applications, such as change detection and classification. Shadows are cast by elevated objects and can directly affect the accuracy of extracted information.
For these reasons, it is very important to detect shadows, particularly in urban high-spatial-resolution imagery, where they pose a significant problem. This paper focuses on automatic shadow detection based on a new spectral index for multispectral imagery, known as the Shadow Detection Index (SDI). The new spectral index was tested on different areas of WorldView-2 images, and the results demonstrate that it has great potential for extracting shadows effectively and automatically. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spectral%20index" title="spectral index">spectral index</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20detection" title=" shadow detection"> shadow detection</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing%20images" title=" remote sensing images"> remote sensing images</a>, <a href="https://publications.waset.org/abstracts/search?q=World-View%202" title=" World-View 2"> World-View 2</a> </p> <a href="https://publications.waset.org/abstracts/13500/a-novel-spectral-index-for-automatic-shadow-detection-in-urban-mapping-based-on-worldview-2-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13500.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">538</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3456</span> An Architectural Model for APT Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nam-Uk%20Kim">Nam-Uk Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung-Hwan%20Kim"> Sung-Hwan Kim</a>, <a
href="https://publications.waset.org/abstracts/search?q=Tai-Myoung%20Chung"> Tai-Myoung Chung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Typical security management systems are not suitable for detecting APT attacks, because they cannot draw the big picture from the trivial events reported by security solutions. Although SIEM solutions have a security analysis engine for this purpose, their analysis mechanisms have yet to be verified in the academic field. This paper proposes an architectural model for APT detection; we will continue to study the correlation analysis mechanism in future work. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advanced%20persistent%20threat" title="advanced persistent threat">advanced persistent threat</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title=" anomaly detection"> anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title=" data mining "> data mining </a> </p> <a href="https://publications.waset.org/abstracts/23009/an-architectural-model-for-apt-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">528</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3455</span> Efficient Ground Targets Detection Using Compressive Sensing in Ground-Based Synthetic-Aperture Radar (SAR) Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gherbi%20Nabil">Gherbi Nabil</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection of ground targets in SAR images is an important
area for radar information processing. In the literature, various algorithms have been discussed in this context; however, most of them suffer from low robustness and accuracy. To this end, we discuss target detection in SAR images based on compressive sensing. First, traditional SAR image target detection algorithms are discussed and their limitations highlighted. Second, a compressive sensing method is proposed based on the sparsity of SAR images. Next, the detection problem is solved using a Multiple Measurement Vector (MMV) configuration. Furthermore, a robust Alternating Direction Method of Multipliers (ADMM) is developed to solve the optimization problem. Finally, detection results obtained using raw complex data are presented. Experimental results on real SAR images verify the effectiveness of the proposed algorithm. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compressive%20sensing" title="compressive sensing">compressive sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=raw%20complex%20data" title=" raw complex data"> raw complex data</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20aperture%20radar" title=" synthetic aperture radar"> synthetic aperture radar</a>, <a href="https://publications.waset.org/abstracts/search?q=ADMM" title=" ADMM"> ADMM</a> </p> <a href="https://publications.waset.org/abstracts/191958/efficient-ground-targets-detection-using-compressive-sensing-in-ground-based-synthetic-aperture-radar-sar-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/191958.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">19</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge
badge-info">3454</span> Stereo Camera Based Speed-Hump Detection Process for Real Time Driving Assistance System in the Daytime</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyun-Koo%20Kim">Hyun-Koo Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Yong-Hun%20Kim"> Yong-Hun Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Soo-Young%20Suk"> Soo-Young Suk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ju%20H.%20Park"> Ju H. Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Youl%20Jung"> Ho-Youl Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an effective daytime speed hump detection process. We focus only on round speed humps in a dynamic daytime road environment. The proposed scheme consists mainly of two processes, stereo matching and speed hump detection, and this paper concentrates on the latter. The speed hump detection process consists of a noise reduction step, a data fusion step, and a speed hump detection step. The proposed system was tested on an Intel Core CPU with 2.80 GHz and 4 GB RAM in urban road environments. The frame rate of the test videos is 30 frames per second, and each frame of the grabbed image sequences is 1280 by 670 pixels. Using object-marked sequences acquired with an on-vehicle camera, we recorded speed hump and non-speed-hump samples. In the tests, the proposed method reached 96.1% and, with a computation time of 13 ms, can be applied in real-time systems.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20fusion" title="data fusion">data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=round%20types%20speed%20hump" title=" round types speed hump"> round types speed hump</a>, <a href="https://publications.waset.org/abstracts/search?q=speed%20hump%20detection" title=" speed hump detection"> speed hump detection</a>, <a href="https://publications.waset.org/abstracts/search?q=surface%20filter" title=" surface filter"> surface filter</a> </p> <a href="https://publications.waset.org/abstracts/15368/stereo-camera-based-speed-hump-detection-process-for-real-time-driving-assistance-system-in-the-daytime" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15368.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">510</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3453</span> DCDNet: Lightweight Document Corner Detection Network Based on Attention Mechanism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kun%20Xu">Kun Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuan%20Xu"> Yuan Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jia%20Qiao"> Jia Qiao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The document detection plays an important role in optical character recognition and text analysis. 
Because traditional detection methods have weak generalization ability, while deep neural networks have complex structures and large numbers of parameters that are hard to deploy on mobile devices, this paper proposes a lightweight Document Corner Detection Network (DCDNet). DCDNet is a two-stage architecture. The first stage, with an encoder-decoder structure, adopts depthwise separable convolutions to greatly reduce the number of network parameters. By introducing a Feature Attention Union (FAU) module, the second stage enhances feature information along the spatial and channel dimensions and adaptively adjusts the receptive field size to strengthen the feature expression ability of the model. To address the large imbalance between corner and non-corner pixels, a Weighted Binary Cross-Entropy Loss (WBCE Loss) is proposed, defining corner detection as a classification problem and making the training process more efficient. To make up for the lack of document corner detection datasets, a dataset of 6620 images, named the Document Corner Detection Dataset (DCDD), was created. Experimental results show that the proposed method obtains fast, stable and accurate detection results on DCDD.
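The class-imbalance weighting idea behind the WBCE loss described above can be sketched as follows. The abstract does not specify the exact weighting scheme, so the single positive-class weight here is a generic assumption for illustration.

```python
import numpy as np

def weighted_bce(pred, target, pos_weight):
    """Weighted binary cross-entropy: corner (positive) pixels are
    up-weighted by pos_weight to counter the corner/non-corner pixel
    imbalance. pred: predicted probabilities in (0, 1); target: {0, 1} map."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    loss = -(pos_weight * target * np.log(pred)
             + (1.0 - target) * np.log(1.0 - pred))
    return float(loss.mean())
```

With pos_weight > 1, a missed corner pixel contributes proportionally more to the loss than a misclassified background pixel, so the rare class is not drowned out during training.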
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=document%20detection" title="document detection">document detection</a>, <a href="https://publications.waset.org/abstracts/search?q=corner%20detection" title=" corner detection"> corner detection</a>, <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title=" attention mechanism"> attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=lightweight" title=" lightweight"> lightweight</a> </p> <a href="https://publications.waset.org/abstracts/152145/dcdnet-lightweight-document-corner-detection-network-based-on-attention-mechanism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152145.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">354</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3452</span> TMIF: Transformer-Based Multi-Modal Interactive Fusion for Rumor Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiandong%20Lv">Jiandong Lv</a>, <a href="https://publications.waset.org/abstracts/search?q=Xingang%20Wang"> Xingang Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Cuiling%20Shao"> Cuiling Shao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The rapid development of social media platforms has made it one of the important news sources. 
While social media provides convenient real-time communication channels, fake news and rumors also spread rapidly through these platforms, misleading the public and even causing negative social impact, given the slow speed and poor consistency of manual rumor detection. We propose an end-to-end rumor detection model, TMIF, which captures the dependencies between multimodal data based on an interactive attention mechanism, uses a transformer for cross-modal feature sequence mapping, and combines hybrid fusion strategies to obtain decision results. Experiments on two multi-modal rumor detection datasets demonstrate the superior overall and early-detection performance of the proposed model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hybrid%20fusion" title="hybrid fusion">hybrid fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20fusion" title=" multimodal fusion"> multimodal fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=rumor%20detection" title=" rumor detection"> rumor detection</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20media" title=" social media"> social media</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer" title=" transformer"> transformer</a> </p> <a href="https://publications.waset.org/abstracts/141806/tmif-transformer-based-multi-modal-interactive-fusion-for-rumor-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141806.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">246</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3451</span> Real-Time Pedestrian Detection Method Based on Improved
YOLOv3</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jingting%20Luo">Jingting Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Yong%20Wang"> Yong Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ying%20Wang"> Ying Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Pedestrian detection in image or video data is a very important and challenging task in security surveillance. The difficulty of this task is to accurately locate and detect pedestrians of different scales in complex scenes. To solve these problems, a deep neural network (RT-YOLOv3) is proposed to realize real-time pedestrian detection at different scales in security monitoring. RT-YOLOv3 improves the traditional YOLOv3 algorithm. First, a deep residual network is used to extract pedestrian features. Then, six convolutional networks at different scales are designed and fused with the corresponding feature maps of the residual network to form the final feature pyramid for pedestrian detection. This method can better characterize pedestrians. To further improve the accuracy and generalization ability of the model, a hybrid training method extracts pedestrian data from the VOC dataset and trains jointly with the INRIA pedestrian dataset. Experiments show that the proposed RT-YOLOv3 method achieves 93.57% mAP (mean average precision) at 46.52 frames per second. In terms of accuracy, RT-YOLOv3 performs better than Fast R-CNN, Faster R-CNN, YOLO, SSD, YOLOv2, and YOLOv3. This method reduces the missed detection rate and false detection rate, improves the positioning accuracy, and meets the requirements of real-time detection of pedestrian objects.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pedestrian%20detection" title="pedestrian detection">pedestrian detection</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20detection" title=" feature detection"> feature detection</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20detection" title=" real-time detection"> real-time detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv3" title=" YOLOv3"> YOLOv3</a> </p> <a href="https://publications.waset.org/abstracts/114446/real-time-pedestrian-detection-method-based-on-improved-yolov3" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114446.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3450</span> Comparison of Vessel Detection in Standard vs Ultra-WideField Retinal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maher%20un%20Nisa">Maher un Nisa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahsan%20Khawaja"> Ahsan Khawaja</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal imaging with Ultra-WideField (UWF) view technology has opened up new avenues in the field of retinal pathology detection. 
Recent developments in retinal imaging, such as the Optos California imaging device, help in acquiring high-resolution images of the retina to assist ophthalmologists in diagnosing and analyzing eye-related pathologies more accurately. This paper investigates the acquired retinal details by comparing vessel detection in standard 45° color fundus images with state-of-the-art 200° UWF retinal images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20fundus" title="color fundus">color fundus</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20images" title=" retinal images"> retinal images</a>, <a href="https://publications.waset.org/abstracts/search?q=ultra-widefield" title=" ultra-widefield"> ultra-widefield</a>, <a href="https://publications.waset.org/abstracts/search?q=vessel%20detection" title=" vessel detection"> vessel detection</a> </p> <a href="https://publications.waset.org/abstracts/33520/comparison-of-vessel-detection-in-standard-vs-ultra-widefield-retinal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33520.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">448</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3449</span> Detection of Clipped Fragments in Speech Signals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sergei%20Aleinik">Sergei Aleinik</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuri%20Matveev"> Yuri Matveev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a novel method for the detection of clipping in speech signals is described.
It is shown that the new method has better performance than known clipping detection methods, is easy to implement, and is robust to changes in signal amplitude, size of data, etc. Statistical simulation results are presented. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clipping" title="clipping">clipping</a>, <a href="https://publications.waset.org/abstracts/search?q=clipped%20signal" title=" clipped signal"> clipped signal</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20signal%20processing" title=" speech signal processing"> speech signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20signal%20processing" title=" digital signal processing"> digital signal processing</a> </p> <a href="https://publications.waset.org/abstracts/4816/detection-of-clipped-fragments-in-speech-signals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4816.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">392</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3448</span> Evaluating Performance of an Anomaly Detection Module with Artificial Neural Network Implementation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edward%20Guill%C3%A9n">Edward Guillén</a>, <a href="https://publications.waset.org/abstracts/search?q=Jhordany%20Rodriguez"> Jhordany Rodriguez</a>, <a href="https://publications.waset.org/abstracts/search?q=Rafael%20P%C3%A1ez"> Rafael Páez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Anomaly detection techniques have been focused on two main components: data extraction and selection and the 
analysis performed over the obtained data. The goal of this paper is to analyze the influence that each of these components has on system performance by evaluating detection over network scenarios with different setups. The independent variables are as follows: the number of system inputs, the way the inputs are encoded, and the complexity of the analysis techniques. For the analysis, several artificial neural network approaches are implemented with different numbers of layers. The obtained results show the influence that each of these variables has on system performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=network%20intrusion%20detection" title="network intrusion detection">network intrusion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title=" artificial neural network"> artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection%20module" title="anomaly detection module">anomaly detection module</a> </p> <a href="https://publications.waset.org/abstracts/2047/evaluating-performance-of-an-anomaly-detection-module-with-artificial-neural-network-implementation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2047.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">343</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3447</span> Automatic Change Detection for High-Resolution Satellite Images of Urban and Suburban Areas</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Antigoni%20Panagiotopoulou">Antigoni Panagiotopoulou</a>, <a href="https://publications.waset.org/abstracts/search?q=Lemonia%20Ragia"> Lemonia Ragia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> High-resolution satellite images can provide detailed information about change detection on the earth. In the present work, QuickBird images of 60 cm/pixel spatial resolution and WorldView images of 30 cm/pixel resolution are utilized to perform automatic change detection in urban and suburban areas of Crete, Greece. There is a relative time difference of 13 years among the satellite images. Multiindex scene representation is applied to the images to classify the scene into buildings, vegetation, water and ground. Then, automatic change detection is made possible by pixel-by-pixel comparison of the classified multi-temporal images. The vegetation and water indices developed in this study prove effective. Furthermore, the proposed change detection approach not only indicates whether changes have taken place but also provides specific information about the types of changes. Future experiments with other scenes could help optimize the proposed spectral indices as well as the entire change detection methodology.
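The pixel-by-pixel comparison of classified multi-temporal images described above can be sketched as follows, assuming two integer label maps over the four classes named in the abstract. Counting from-to transitions is what reports the type of change, not merely its presence.

```python
import numpy as np

def change_map(labels_t1, labels_t2, class_names):
    """Pixel-by-pixel comparison of two classified scenes (e.g. building,
    vegetation, water, ground). Returns a boolean change mask and a count
    of each from->to class transition."""
    changed = labels_t1 != labels_t2
    transitions = {}
    for a, b in zip(labels_t1[changed].ravel(), labels_t2[changed].ravel()):
        key = f"{class_names[a]}->{class_names[b]}"
        transitions[key] = transitions.get(key, 0) + 1
    return changed, transitions
```

In practice the two label maps would come from the multiindex classification of the co-registered QuickBird and WorldView scenes; here they are simply inputs.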
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=change%20detection" title="change detection">change detection</a>, <a href="https://publications.waset.org/abstracts/search?q=multiindex%20scene%20representation" title=" multiindex scene representation"> multiindex scene representation</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20index" title=" spectral index"> spectral index</a>, <a href="https://publications.waset.org/abstracts/search?q=QuickBird" title=" QuickBird"> QuickBird</a>, <a href="https://publications.waset.org/abstracts/search?q=WorldView" title=" WorldView"> WorldView</a> </p> <a href="https://publications.waset.org/abstracts/132460/automatic-change-detection-for-high-resolution-satellite-images-of-urban-and-suburban-areas" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132460.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3446</span> The Laser Line Detection for Autonomous Mapping Based on Color Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pavel%20Chmelar">Pavel Chmelar</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20Dobrovolny"> Martin Dobrovolny</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Laser projection, or laser footprint detection, is today widely used in many fields of robotics, measurement, and electronics. The system accuracy strictly depends on precise detection of the laser footprint on target objects. This article deals with laser line detection based on RGB segmentation and component labeling. 
The developed optical rangefinder was used as the measurement device. The optical rangefinder is equipped with vertical sweeping of the laser beam and a high-quality camera. This system was developed mainly for the automatic exploration and mapping of unknown spaces. The first section presents a new detection algorithm. The second section presents the measurement results; the measurements were performed under variable light conditions in interiors. The last part of the article presents the achieved results and the differences between day and night measurements. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20segmentation" title="color segmentation">color segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=component%20labelling" title=" component labelling"> component labelling</a>, <a href="https://publications.waset.org/abstracts/search?q=laser%20line%20detection" title=" laser line detection"> laser line detection</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20mapping" title=" automatic mapping"> automatic mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=distance%20measurement" title=" distance measurement"> distance measurement</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20map" title=" vector map"> vector map</a> </p> <a href="https://publications.waset.org/abstracts/1789/the-laser-line-detection-for-autonomous-mapping-based-on-color-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1789.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">432</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3445</span> A Background Subtraction
Based Moving Object Detection Around the Host Vehicle</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyojin%20Lim">Hyojin Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Cuong%20Nguyen%20Khac"> Cuong Nguyen Khac</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Youl%20Jung"> Ho-Youl Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a moving object detection method that helps the driver safely take his/her car out of a parking lot. When moving objects such as motorbikes, pedestrians, other cars, and obstacles are detected at the rear side of the host vehicle, the proposed algorithm warns the driver. We assume that the host vehicle is just before departure. Gaussian Mixture Model (GMM) based background subtraction is applied, with pre-processing such as smoothing and post-processing such as morphological filtering added. We examine the question “which color space has better performance for the detection of moving objects?” Three color spaces, RGB, YCbCr, and Y, are applied and compared in terms of detection rate. Through simulation, we show that the RGB space is more suitable for moving object detection based on background subtraction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gaussian%20mixture%20model" title="gaussian mixture model">gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20subtraction" title=" background subtraction"> background subtraction</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20object%20detection" title=" moving object detection"> moving object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20space" title=" color space"> color space</a>, <a href="https://publications.waset.org/abstracts/search?q=morphological%20filtering" title=" morphological filtering"> morphological filtering</a> </p> <a href="https://publications.waset.org/abstracts/32650/a-background-subtraction-based-moving-object-detection-around-the-host-vehicle" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">617</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3444</span> The Comparison of Limits of Detection of Lateral Flow Immunochromatographic Strips of Different Types of Mycotoxins</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xinyi%20Zhao">Xinyi Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Furong%20Tian"> Furong Tian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Mycotoxins are secondary metabolic products of fungi. They are toxic, carcinogenic, and mutagenic, and they pose a serious health threat to both humans and animals, causing severe illnesses and even deaths. 
Rapid, simple, and inexpensive detection methods for mycotoxins are of immense importance and in great demand in the food and beverage industry as well as in agriculture and environmental monitoring. Lateral flow immunochromatographic strips (ICSTs) have been widely used in food safety and environmental monitoring. Forty-six papers, dated 2001-2021, were identified on Google Scholar and Scopus and reviewed for the limits of detection and nanomaterials of ICSTs for different types of mycotoxins. Twenty-five of these papers were compared to identify the lowest limit of detection among different mycotoxins (aflatoxin B1: 10, zearalenone: 5, fumonisin B1: 5, trichothecene-A: 5). Most of these highly sensitive strips use a competitive format, while the sandwich structure is usually used in large-scale detection. In conclusion, the mycotoxin that receives the most research is aflatoxin B1, and its limit of detection is the lowest; gold-nanoparticle-based immunochromatographic test strips achieve the lowest limits of detection. Five papers involve smartphone detection, and they all detect aflatoxin B1 with gold nanoparticles. In these papers, quantitative concentration results can be obtained when the user uploads a photograph of the test lines via the smartphone application. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aflatoxin%20B1" title="aflatoxin B1">aflatoxin B1</a>, <a href="https://publications.waset.org/abstracts/search?q=limit%20of%20detection" title=" limit of detection"> limit of detection</a>, <a href="https://publications.waset.org/abstracts/search?q=gold%20nanoparticle" title=" gold nanoparticle"> gold nanoparticle</a>, <a href="https://publications.waset.org/abstracts/search?q=lateral%20flow%20immunochromatographic%20strips" title=" lateral flow immunochromatographic strips"> lateral flow immunochromatographic strips</a>, <a href="https://publications.waset.org/abstracts/search?q=mycotoxins" title=" mycotoxins"> mycotoxins</a> </p> <a href="https://publications.waset.org/abstracts/139268/the-comparation-of-limits-of-detection-of-lateral-flow-immunochromatographic-strips-of-different-types-of-mycotoxins" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139268.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3443</span> Paper-Based Detection Using Synthetic Gene Circuits</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vanessa%20Funk">Vanessa Funk</a>, <a href="https://publications.waset.org/abstracts/search?q=Steven%20Blum"> Steven Blum</a>, <a href="https://publications.waset.org/abstracts/search?q=Stephanie%20Cole"> Stephanie Cole</a>, <a href="https://publications.waset.org/abstracts/search?q=Jorge%20Maciel"> Jorge Maciel</a>, <a href="https://publications.waset.org/abstracts/search?q=Matthew%20Lux"> Matthew Lux</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Paper-based synthetic gene circuits offer a new paradigm for programmable, fieldable biodetection. We demonstrate that by freeze-drying gene circuits with in vitro expression machinery, we can use complementary RNA sequences to trigger colorimetric changes upon rehydration. We have successfully utilized both green fluorescent protein and luciferase-based reporters for easy visualization in solution. Through several efforts, we aim to use this new platform technology to address a variety of needs in portable detection by demonstrating several more expression and reporter systems for detection functions on paper. In addition to RNA-based biodetection, we are exploring the use of the various mechanisms that cells use to respond to environmental conditions to move towards all-hazards detection. Examples include explosives, heavy metals for water quality, and toxic chemicals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cell-free%20lysates" title="cell-free lysates">cell-free lysates</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=gene%20circuits" title=" gene circuits"> gene circuits</a>, <a href="https://publications.waset.org/abstracts/search?q=in%20vitro" title=" in vitro"> in vitro</a> </p> <a href="https://publications.waset.org/abstracts/71047/paper-based-detection-using-synthetic-gene-circuits" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71047.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">394</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3442</span> A
Highly Sensitive Dip Strip for Detection of Phosphate in Water</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hojat%20Heidari-Bafroui">Hojat Heidari-Bafroui</a>, <a href="https://publications.waset.org/abstracts/search?q=Amer%20Charbaji"> Amer Charbaji</a>, <a href="https://publications.waset.org/abstracts/search?q=Constantine%20Anagnostopoulos"> Constantine Anagnostopoulos</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Faghri"> Mohammad Faghri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Phosphorus is an essential nutrient for plant life and is most frequently found as phosphate in water. Once phosphate is found in abundance in surface water, a series of adverse effects on an ecosystem can be initiated. Therefore, a portable and reliable method is needed to monitor phosphate concentrations in the field. In this paper, an inexpensive dip strip device, with the ascorbic acid/antimony reagent dried on blotting paper and combined with wet chemistry, is developed for the detection of low concentrations of phosphate in water. Ammonium molybdate and sulfuric acid are stored separately in liquid form so as to significantly improve the lifetime of the device and enhance the reproducibility of its performance. The limits of detection and quantification of the optimized device are 0.134 ppm and 0.472 ppm of phosphate in water, respectively. The device’s shelf life, storage conditions, and limit of detection are superior to those previously reported for paper-based phosphate detection devices. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=phosphate%20detection" title="phosphate detection">phosphate detection</a>, <a href="https://publications.waset.org/abstracts/search?q=paper-based%20device" title=" paper-based device"> paper-based device</a>, <a href="https://publications.waset.org/abstracts/search?q=molybdenum%20blue%20method" title=" molybdenum blue method"> molybdenum blue method</a>, <a href="https://publications.waset.org/abstracts/search?q=colorimetric%20assay" title=" colorimetric assay"> colorimetric assay</a> </p> <a href="https://publications.waset.org/abstracts/134960/a-highly-sensitive-dip-strip-for-detection-of-phosphate-in-water" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/134960.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">170</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3441</span> Automated Tracking and Statistics of Vehicles at the Signalized Intersection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Zhang">Qiang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaojian%20Hu1"> Xiaojian Hu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An intersection is a place that vehicles and pedestrians must pass through, turn at, and clear. Obtaining the motion data of vehicles near an intersection is of great significance for transportation research. Since there are usually many targets and frequent conflicts between them, it is difficult to obtain vehicle motion parameters from traffic videos of intersections. 
According to the characteristics of traffic videos, this paper applies video technology to realize automated tracking, counting, and trajectory extraction of vehicles, collecting traffic data from roadside surveillance cameras installed near intersections. Based on the video recognition method, the vehicles in each lane near the intersection are tracked, their trajectories are extracted, and they are counted under various degrees of occlusion and visibility. The performance is compared with currently recognized CPU-based real-time tracking-by-detection algorithms. The presented system is faster than the others and has better real-time performance. The accuracy of direction reaches about 94.99% on average, and the accuracy of classification and statistics reaches about 75.12% on average. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tracking%20and%20statistics" title="tracking and statistics">tracking and statistics</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle" title=" vehicle"> vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=signalized%20intersection" title=" signalized intersection"> signalized intersection</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20parameter" title=" motion parameter"> motion parameter</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory" title=" trajectory"> trajectory</a> </p> <a href="https://publications.waset.org/abstracts/136436/automated-tracking-and-statistics-of-vehicles-at-the-signalized-intersection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136436.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">221</span> </span> </div> </div> <ul class="pagination"> <li class="page-item"><a
class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=3" rel="prev">‹</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=1">1</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=3">3</a></li> <li class="page-item active"><span class="page-link">4</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=118">118</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=119">119</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lane%20detection&page=5" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div 
style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" 
rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>