<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: counting people</title> <meta name="description" content="Search results for: counting people"> <meta name="keywords" content="counting people"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" 
alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="counting people" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> 
<div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="counting people"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 7303</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: counting people</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7303</span> Counting People Utilizing Space-Time Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Elmarhomy">Ahmed Elmarhomy</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Terada"> K. Terada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An automated method for counting passersby is proposed using virtual vertical measurement lines. The space-time image represents the human regions, which are extracted by a segmentation process. Different color spaces are used to perform template matching, and a suitable template-matching scheme determines the direction and speed of passing people.
Distinguishing between one and two passersby is investigated using the correlation between passerby speed and human-pixel area. Finally, the effectiveness of the presented method is experimentally verified. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=counting%20people" title="counting people">counting people</a>, <a href="https://publications.waset.org/abstracts/search?q=measurement%20line" title=" measurement line"> measurement line</a>, <a href="https://publications.waset.org/abstracts/search?q=space-time%20image" title=" space-time image"> space-time image</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=template%20matching" title=" template matching"> template matching</a> </p> <a href="https://publications.waset.org/abstracts/46877/counting-people-utilizing-space-time-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46877.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">452</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7302</span> Modelling of Passengers Exchange between Trains and Platforms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guillaume%20Craveur">Guillaume Craveur</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The evaluation of the passenger exchange time is necessary for railway operators in order to optimize and dimension rail traffic. Several influential parameters are identified and studied. Each parameter leads to a model built with the buildingEXODUS software.
The objective is to model passenger exchanges as measured by passenger counting. Population size is dimensioned using passenger counting files, which report on the train service and contain the following useful information: the number of passengers who board and leave the train, and the exchange time. This information is collected by sensors placed at the top of each train door. With passenger counting files it is possible to know how many people are involved in an exchange and how long it lasts, but not the passenger flow through each door. Not all information about the observed exchanges is therefore available. For this reason, and in order to minimize inaccuracies, only short exchanges (less than 30 seconds) involving a maximum number of people are considered. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=passengers%20exchange" title="passengers exchange">passengers exchange</a>, <a href="https://publications.waset.org/abstracts/search?q=numerical%20tools" title=" numerical tools"> numerical tools</a>, <a href="https://publications.waset.org/abstracts/search?q=rolling%20stock" title=" rolling stock"> rolling stock</a>, <a href="https://publications.waset.org/abstracts/search?q=platforms" title=" platforms"> platforms</a> </p> <a href="https://publications.waset.org/abstracts/72046/modelling-of-passengers-exchange-between-trains-and-platforms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72046.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">228</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7301</span> Efficient Passenger Counting in Public Transport Based on Machine Learning</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chonlakorn%20Wiboonsiriruk">Chonlakorn Wiboonsiriruk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ekachai%20Phaisangittisagul"> Ekachai Phaisangittisagul</a>, <a href="https://publications.waset.org/abstracts/search?q=Chadchai%20Srisurangkul"> Chadchai Srisurangkul</a>, <a href="https://publications.waset.org/abstracts/search?q=Itsuo%20Kumazawa"> Itsuo Kumazawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Public transportation is a crucial aspect of passenger transportation, with buses playing a vital role in the transportation service. Passenger counting is an essential tool for organizing and managing transportation services. However, manual counting is a tedious and time-consuming task, which is why computer vision algorithms are being utilized to make the process more efficient. In this study, different object detection algorithms combined with passenger tracking are investigated to compare passenger counting performance. The system employs the EfficientDet algorithm, which has demonstrated superior performance in terms of speed and accuracy. Our results show that the proposed system can accurately count passengers in varying conditions with an accuracy of 94%. 
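The abstract above does not detail how tracked detections become a passenger count; a common approach is to count a track when it crosses a virtual line at the door. The sketch below is a minimal illustration of that idea only (the function, line position, and sample trajectories are all hypothetical, not taken from the paper):

```python
# Minimal sketch: count passengers whose tracked centroids cross a
# virtual door line (y = LINE_Y). Tracks are {track_id: [(x, y), ...]}.
# Illustration of line-crossing counting, not the paper's implementation.

LINE_Y = 100  # hypothetical position of the door line, in pixels

def count_crossings(tracks, line_y=LINE_Y):
    """Count boarding (downward) and alighting (upward) line crossings."""
    boarding = alighting = 0
    for path in tracks.values():
        for (x0, y0), (x1, y1) in zip(path, path[1:]):
            if y0 < line_y <= y1:    # moved from above to below the line
                boarding += 1
            elif y1 < line_y <= y0:  # moved from below to above the line
                alighting += 1
    return boarding, alighting

if __name__ == "__main__":
    tracks = {
        1: [(50, 80), (52, 95), (55, 110)],   # crosses downward: boards
        2: [(70, 120), (68, 105), (66, 90)],  # crosses upward: alights
        3: [(30, 60), (32, 70), (31, 65)],    # never crosses the line
    }
    print(count_crossings(tracks))  # (1, 1)
```

In a full system, the track points would come from the detector-plus-tracker stage rather than being hard-coded.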
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=passenger%20counting" title=" passenger counting"> passenger counting</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20transportation" title=" public transportation"> public transportation</a> </p> <a href="https://publications.waset.org/abstracts/167734/efficient-passenger-counting-in-public-transport-based-on-machine-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167734.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7300</span> Sorting Fish by Hu Moments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20M.%20Hern%C3%A1ndez-Ontiveros">J. M. Hernández-Ontiveros</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20E.%20Garc%C3%ADa-Guerrero"> E. E. García-Guerrero</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Inzunza-Gonz%C3%A1lez"> E. Inzunza-González</a>, <a href="https://publications.waset.org/abstracts/search?q=O.%20R.%20L%C3%B3pez-Bonilla"> O. R. López-Bonilla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the implementation of an algorithm that identifies and counts different fish species: Catfish, Sea bream, Sawfish, Tilapia, and Totoaba.
The main contribution of the method is combining the invariance of Hu moments to position, rotation, and scale with proper counting of the fish. Identification and counting are performed on images under different noise conditions. The experimental results suggest the potential of the proposed algorithm for application in different aquaculture production scenarios. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=counting%20fish" title="counting fish">counting fish</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20image%20processing" title=" digital image processing"> digital image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=invariant%20moments" title=" invariant moments"> invariant moments</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a> </p> <a href="https://publications.waset.org/abstracts/27652/sorting-fish-by-hu-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27652.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">409</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7299</span> Robust and Real-Time Traffic Counting System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hossam%20M.%20Moftah">Hossam M.
Moftah</a>, <a href="https://publications.waset.org/abstracts/search?q=Aboul%20Ella%20Hassanien"> Aboul Ella Hassanien</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, the importance of automatic traffic control has increased due to the problem of traffic jams, especially in big cities, for signal control and efficient traffic management. Traffic counting, as a kind of traffic control, is important for knowing the road traffic density in real time. This paper presents a fast and robust traffic counting system using different image processing techniques. The proposed system is composed of the following four fundamental building phases: image acquisition, pre-processing, object detection, and finally counting the connected objects. The object detection phase comprises the following five steps: subtracting the background, converting the image to binary, closing gaps and connecting nearby blobs, smoothing the image to remove noise and very small objects, and detecting the connected objects. Experimental results show the great success of the proposed approach.
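The detection phase described above can be sketched end to end: subtract the background, binarize, drop noise-sized blobs, and count what remains. The following is a pure-Python stand-in for those steps (a real system would use OpenCV or similar); the frame data, threshold, and blob-size cutoff are illustrative assumptions, not values from the paper:

```python
# Illustrative sketch of the counting pipeline: background subtraction ->
# binary threshold -> ignore tiny blobs -> count connected components.

from collections import deque

def subtract_background(frame, background, thresh=30):
    """Binarize |frame - background| > thresh for 2D grayscale lists."""
    return [[1 if abs(f - b) > thresh else 0 for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def count_blobs(binary, min_size=2):
    """Count 4-connected foreground components of at least min_size pixels."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # BFS flood fill to measure this component.
                size, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    size += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if size >= min_size:  # ignore noise-sized blobs
                    count += 1
    return count

if __name__ == "__main__":
    background = [[10] * 8 for _ in range(6)]
    frame = [row[:] for row in background]
    for y, x in [(1, 1), (1, 2), (2, 1), (4, 5), (4, 6)]:  # two "vehicles"
        frame[y][x] = 200
    binary = subtract_background(frame, background)
    print(count_blobs(binary))  # 2
```

The `min_size` filter plays the role of the smoothing step that removes noise and very small objects before counting.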
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20counting" title="traffic counting">traffic counting</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20management" title=" traffic management"> traffic management</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/43835/robust-and-real-time-traffic-counting-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43835.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">294</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7298</span> Exploring Counting Methods for the Vertices of Certain Polyhedra with Uncertainties</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sammani%20Danwawu%20Abdullahi">Sammani Danwawu Abdullahi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Vertex enumeration algorithms explore the methods and procedures of generating the vertices of general polyhedra formed by systems of equations or inequalities. These problems of enumerating the extreme points (vertices) of general polyhedra are shown to be NP-hard. This leads to exploring how to count the vertices of general polyhedra without listing them, which is also shown to be #P-complete.
Some fully polynomial randomized approximation schemes (fpras) of counting the vertices of some special classes of polyhedra associated with Down-Sets, Independent Sets, 2-Knapsack problems and 2 x n transportation problems are presented together with some discovered open problems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=counting%20with%20uncertainties" title="counting with uncertainties">counting with uncertainties</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20programming" title=" mathematical programming"> mathematical programming</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization" title=" optimization"> optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=vertex%20enumeration" title=" vertex enumeration"> vertex enumeration</a> </p> <a href="https://publications.waset.org/abstracts/38580/exploring-counting-methods-for-the-vertices-of-certain-polyhedra-with-uncertainties" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">357</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7297</span> Count of Trees in East Africa with Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nubwimana%20Rachel">Nubwimana Rachel</a>, <a href="https://publications.waset.org/abstracts/search?q=Mugabowindekwe%20Maurice"> Mugabowindekwe Maurice</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Trees play a crucial role in maintaining biodiversity and providing various ecological services. 
Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques. Deep learning makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research applies deep learning techniques to automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models such as YOLOv7, SSD, and UNET, along with Generative Adversarial Networks to generate synthetic training samples and other augmentation techniques, including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned on satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, UNET demonstrated the best performance, with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the UNET model, and offers an efficient and precise alternative to conventional tree-counting methods.
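Validation metrics of the kind reported above (accuracy, precision) are computed pixel-wise from predicted and ground-truth segmentation masks. A generic sketch of that computation follows; it is not the authors' evaluation code, and the toy 0/1 grids stand in for real masks:

```python
# Generic sketch of pixel-wise validation metrics for a binary
# segmentation model: accuracy and precision over flattened masks.

def pixel_metrics(pred, truth):
    """Return (accuracy, precision) for binary masks given as nested lists."""
    tp = fp = tn = fn = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            if p and t:
                tp += 1        # predicted tree pixel, truly tree
            elif p and not t:
                fp += 1        # predicted tree pixel, actually background
            elif not p and t:
                fn += 1        # missed tree pixel
            else:
                tn += 1        # correctly predicted background
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return accuracy, precision

if __name__ == "__main__":
    truth = [[1, 1, 0, 0],
             [1, 1, 0, 0]]
    pred  = [[1, 1, 0, 0],
             [1, 0, 1, 0]]
    acc, prec = pixel_metrics(pred, truth)
    print(acc, prec)  # 0.75 0.75
```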
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title="remote sensing">remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=tree%20counting" title=" tree counting"> tree counting</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a> </p> <a href="https://publications.waset.org/abstracts/177935/count-of-trees-in-east-africa-with-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177935.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7296</span> Numerical Implementation and Testing of Fractioning Estimator Method for the Box-Counting Dimension of Fractal Objects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abraham%20Ter%C3%A1n%20Salcedo">Abraham Terán Salcedo</a>, <a href="https://publications.waset.org/abstracts/search?q=Didier%20Samayoa%20Ochoa"> Didier Samayoa Ochoa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work presents a numerical implementation of a method for estimating the box-counting dimension of self-avoiding curves on a planar space, fractal objects captured on digital images; 
this method is named fractioning estimator. Classical methods of digital image processing, such as noise filtering, contrast manipulation, and thresholding, among others, are used in order to obtain binary images that are suitable for performing the necessary computations of the fractioning estimator. A user interface is developed for performing the image processing operations and testing the fractioning estimator on different captured images of real-life fractal objects. To analyze the results, the estimations obtained through the fractioning estimator are compared to the results obtained through other methods that are already implemented on different available software for computing and estimating the box-counting dimension. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=box-counting" title="box-counting">box-counting</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20image%20processing" title=" digital image processing"> digital image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal%20dimension" title=" fractal dimension"> fractal dimension</a>, <a href="https://publications.waset.org/abstracts/search?q=numerical%20method" title=" numerical method"> numerical method</a> </p> <a href="https://publications.waset.org/abstracts/160901/numerical-implementation-and-testing-of-fractioning-estimator-method-for-the-box-counting-dimension-of-fractal-objects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160901.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">83</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7295</span> Investigation of Several New Ionic Liquids’ Behaviour during ²¹⁰PB/²¹⁰BI 
Cherenkov Counting in Waters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nata%C5%A1a%20Todorovi%C4%87">Nataša Todorović</a>, <a href="https://publications.waset.org/abstracts/search?q=Jovana%20Nikolov"> Jovana Nikolov</a>, <a href="https://publications.waset.org/abstracts/search?q=Ivana%20Stojkovi%C4%87"> Ivana Stojković</a>, <a href="https://publications.waset.org/abstracts/search?q=Milan%20Vrane%C5%A1"> Milan Vraneš</a>, <a href="https://publications.waset.org/abstracts/search?q=Jovana%20Pani%C4%87"> Jovana Panić</a>, <a href="https://publications.waset.org/abstracts/search?q=Slobodan%20Gad%C5%BEuri%C4%87"> Slobodan Gadžurić</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The detection of ²¹⁰Pb levels in aquatic environments is of interest in various scientific studies. Its precise determination is important not only for the radiological assessment of drinking waters; the distributions of ²¹⁰Pb and ²¹⁰Po in the marine environment are also significant for assessing the removal rates of particles from the ocean and particle fluxes during transport along the coast, as well as particulate organic carbon export in the upper ocean. Measurement techniques for ²¹⁰Pb determination, such as gamma spectrometry, alpha spectrometry, or liquid scintillation counting (LSC), are either time-consuming or demand expensive equipment or complicated chemical pre-treatments. However, another possibility is to measure ²¹⁰Pb on an LS counter, if it is in equilibrium with its progeny ²¹⁰Bi, through the Cherenkov counting method. It is unaffected by chemical quenching and allows easy sample preparation but has the drawback of lower counting efficiencies than standard LSC methods, typically from 10% up to 20%.
The aim of the research presented in this paper is to investigate a possible increase in the detection efficiency of Cherenkov counting during ²¹⁰Pb/²¹⁰Bi detection on a Quantulus 1220 LS counter. Considering the naturally low levels of ²¹⁰Pb in aqueous samples, adding ionic liquids to the counting vials with the analysed samples has the benefit of lowering the detection limit during ²¹⁰Pb quantification. Our results demonstrate that the ionic liquid 1-butyl-3-methylimidazolium salicylate increases Cherenkov counting efficiency more effectively than the previously explored 2-hydroxypropan-1-amminium salicylate. Consequently, the impact of a few other ionic liquids synthesized with the same cation group (1-butyl-3-methylimidazolium benzoate, 1-butyl-3-methylimidazolium 3-hydroxybenzoate, and 1-butyl-3-methylimidazolium 4-hydroxybenzoate) was explored to test their potential influence on Cherenkov counting efficiency. It was confirmed that, among those explored, only the ionic liquids in salicylate form exhibit a wavelength-shifting effect. Namely, the addition of small amounts (around 0.8 g) of 1-butyl-3-methylimidazolium salicylate increases the detection efficiency from 16% to more than 70%, consequently reducing the detection threshold by more than four times. Moreover, the addition of ionic liquids could find application in the quantification of other radionuclides besides ²¹⁰Pb/²¹⁰Bi via the Cherenkov counting method.
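The "more than four times" reduction is consistent with a Currie-style detection limit that scales inversely with counting efficiency for fixed background and counting time: 0.70/0.16 is about 4.4. A back-of-the-envelope check (the background counts and counting time below are illustrative assumptions, not values from the paper):

```python
# Currie-style minimum detectable activity (MDA): for fixed background B
# and live time t, MDA ~ (2.71 + 4.65*sqrt(B)) / (efficiency * t), so it
# scales as 1/efficiency. Raising efficiency from 16% to 70% therefore
# lowers the detection threshold by 0.70/0.16 ~ 4.4x.

from math import sqrt

def mda(background_counts, efficiency, count_time_s):
    """Simplified minimum detectable activity (arbitrary units)."""
    return (2.71 + 4.65 * sqrt(background_counts)) / (efficiency * count_time_s)

if __name__ == "__main__":
    b, t = 100.0, 3600.0  # illustrative background counts and live time
    before = mda(b, 0.16, t)
    after = mda(b, 0.70, t)
    print(round(before / after, 2))  # 4.38 -- "more than four times"
```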
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=liquid%20scintillation%20counting" title="liquid scintillation counting">liquid scintillation counting</a>, <a href="https://publications.waset.org/abstracts/search?q=ionic%20liquids" title=" ionic liquids"> ionic liquids</a>, <a href="https://publications.waset.org/abstracts/search?q=Cherenkov%20counting" title=" Cherenkov counting"> Cherenkov counting</a>, <a href="https://publications.waset.org/abstracts/search?q=%C2%B2%C2%B9%E2%81%B0PB%2F%C2%B2%C2%B9%E2%81%B0BI%20in%20water" title=" ²¹⁰PB/²¹⁰BI in water"> ²¹⁰PB/²¹⁰BI in water</a> </p> <a href="https://publications.waset.org/abstracts/152211/investigation-of-several-new-ionic-liquids-behaviour-during-21pb21bi-cherenkov-counting-in-waters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152211.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7294</span> Microfluidic Impedimetric Biochip and Related Methods for Measurement Chip Manufacture and Counting Cells</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amina%20Farooq">Amina Farooq</a>, <a href="https://publications.waset.org/abstracts/search?q=Nauman%20Zafar%20Butt"> Nauman Zafar Butt</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper is about methods and tools for counting particles of interest, such as cells. 
A microfluidic system is presented that combines interconnected electronics on a flexible substrate, inlet-outlet ports and interface schemes, sensitive and selective cell detection, and cell-counting processing at polymer interfaces in a microscale biosensor for detecting target biological and non-biological cells. The fabrication process includes the development of fluidic channels, planar fluidic contact ports, integrated metal electrodes on a flexible substrate for impedance measurements, and a surface-modification plasma treatment as an intermediate bonding layer. Magnetron DC sputtering is used to deposit a double metal layer (Ti/Pt) over the polypropylene film, and specified zones are defined and etched using a photoresist layer. Small fluid volumes, a reduced detection region, and electrical impedance measurements over a range of frequencies improve the sensitivity and specificity of cell counting. In operation, fluid samples containing particles of interest flow continuously through the microfluidic channels; the electrical differential counter generates a bipolar pulse for each passing cell, and the total number of particles of interest originally in the fluid sample is calculated with a MATLAB program and signal processing. With these methods and similar devices, it is feasible to develop a robust and economical kit for cell counting in whole-blood samples.
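The pulse-counting step described above is done in MATLAB in the paper; the same idea can be sketched as a small state machine that counts a cell whenever the differential signal swings past a positive threshold, then a negative one, then returns to baseline. The Python version and its synthetic trace below are purely illustrative:

```python
# Sketch of bipolar-pulse counting: each passing cell produces a
# positive-then-negative excursion in the differential signal, so cells
# are counted by tracking threshold crossings back to baseline.

def count_bipolar_pulses(signal, threshold):
    """Count bipolar pulses: positive lobe, then negative lobe, then baseline."""
    count = 0
    state = "baseline"
    for v in signal:
        if state == "baseline" and v > threshold:
            state = "positive"          # entered the positive lobe
        elif state == "positive" and v < -threshold:
            state = "negative"          # entered the negative lobe
        elif state == "negative" and abs(v) <= threshold:
            count += 1                  # pulse complete, back to baseline
            state = "baseline"
    return count

if __name__ == "__main__":
    # Two synthetic bipolar pulses riding on a quiet baseline.
    trace = [0, 0, 5, 9, 4, -6, -9, -3, 0, 0, 1, 6, 10, 2, -7, -8, -2, 0]
    print(count_bipolar_pulses(trace, threshold=3))  # 2
```

A production version would also reject pulses that are too short or too long for a plausible cell transit time.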
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=impedance" title="impedance">impedance</a>, <a href="https://publications.waset.org/abstracts/search?q=biochip" title=" biochip"> biochip</a>, <a href="https://publications.waset.org/abstracts/search?q=cell%20counting" title=" cell counting"> cell counting</a>, <a href="https://publications.waset.org/abstracts/search?q=microfluidics" title=" microfluidics"> microfluidics</a> </p> <a href="https://publications.waset.org/abstracts/142607/microfluidic-impedimetric-biochip-and-related-methods-for-measurement-chip-manufacture-and-counting-cells" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142607.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7293</span> A Comparison of YOLO Family for Apple Detection and Counting in Orchards</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuanqing%20Li">Yuanqing Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Changyi%20Lei"> Changyi Lei</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhaopeng%20Xue"> Zhaopeng Xue</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhuo%20Zheng"> Zhuo Zheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Yanbo%20Long"> Yanbo Long</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In agricultural production and breeding, implementing automatic picking robots in orchard farming to reduce human labour and error is challenging. Their core function is automatic identification based on machine vision.
This paper focuses on apple detection and counting in orchards and implements several deep learning methods. Extensive datasets are used, and a semi-automatic annotation method is proposed. The deep learning models evaluated are from the state-of-the-art YOLO family. Considering the nature of the models with their various backbones, a detailed multi-dimensional comparison is made in terms of counting accuracy, mAP and model memory, laying the foundation for automatic precision agriculture. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=agricultural%20object%20detection" title="agricultural object detection">agricultural object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title=" machine vision"> machine vision</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLO%20family" title=" YOLO family"> YOLO family</a> </p> <a href="https://publications.waset.org/abstracts/134964/a-comparison-of-yolo-family-for-apple-detection-and-counting-in-orchards" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/134964.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">198</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7292</span> Multiple-Channel Coulter Counter for Cell Sizing and Enumeration </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yu%20Chen">Yu Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Seong-Jin%20Kim"> Seong-Jin Kim</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Jaehoon%20Chung"> Jaehoon Chung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> High-throughput cell counting and sizing are often required for biomedical applications. Here we report the design, fabrication, and validation of a low-cost, micro-machined multiple-channel Coulter counter for such applications. Multiple vertical through-holes were fabricated on a silicon chip and combined with a PDMS micro-fluidic channel that serves as the sensing channel. To avoid the crosstalk introduced by the electrical connection, the potential of each channel is monitored instead of the current passing through it, which makes high throughput possible. A peak in the output potential is captured when a cell or particle passes through the microhole. The device was validated by counting and sizing polystyrene beads with diameters of 6 μm, 10 μm and 15 μm. With the sampling frequency set at 100 kHz, up to 5,000 counts per second can be realized for each channel. Counting and enumeration of MCF7 cancer cells are also demonstrated. 
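The sizing principle behind a Coulter counter can be illustrated with a short sketch: the resistive pulse amplitude is roughly proportional to the displaced particle volume, so diameter scales as the cube root of the peak height. The calibration constant and millivolt values below are hypothetical, not taken from the paper; they are chosen so that a 10 µm bead maps to a 1000 mV pulse.

```python
def diameter_from_peak(peak_mv, k=1.0):
    """Estimate particle diameter (um) from pulse amplitude,
    assuming amplitude = k * diameter**3 (volume proportionality)."""
    return (peak_mv / k) ** (1.0 / 3.0)

# With k = 1.0 mV/um^3 (hypothetical calibration from a 10 um bead):
print(round(diameter_from_peak(216.0), 3))   # ~6 um bead
print(round(diameter_from_peak(3375.0), 3))  # ~15 um bead
```

Binning these estimated diameters reproduces the 6/10/15 µm bead populations used for validation in the abstract.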
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Coulter%20counter" title="Coulter counter">Coulter counter</a>, <a href="https://publications.waset.org/abstracts/search?q=cell%20enumeration" title=" cell enumeration"> cell enumeration</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20through-put" title=" high through-put"> high through-put</a>, <a href="https://publications.waset.org/abstracts/search?q=cell%20sizing" title=" cell sizing"> cell sizing</a> </p> <a href="https://publications.waset.org/abstracts/12788/multiple-channel-coulter-counter-for-cell-sizing-and-enumeration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12788.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">610</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7291</span> The Effect of Fetal Movement Counting on Maternal Antenatal Attachment </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Esra%20G%C3%BCney">Esra Güney</a>, <a href="https://publications.waset.org/abstracts/search?q=Tuba%20U%C3%A7ar"> Tuba Uçar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aim: This study has been conducted for the purpose of determining the effects of fetal movement counting on antenatal maternal attachment. Material and Method: This research was conducted on the basis of the real test model with the pre-test /post-test control groups. The study population consists of pregnant women registered in the six different Family Health Centers located in the central Malatya districts of Yeşilyurt and Battalgazi. 
Based on a power analysis, the sample size was calculated as at least 55 pregnant women per group (55 test, 55 control). The data were collected using a Personal Information Form and the MAAS (Maternal Antenatal Attachment Scale) between July 2015 and June 2016. Fetal movement counting training was given by the researchers to the pregnant women in the experimental group after the pre-test data collection; no intervention was applied to the control group. Post-test data for both groups were collected after four weeks. Data were evaluated with percentages, arithmetic averages, the chi-square test, and t-tests for dependent and independent groups. Result: In the MAAS, the pre-test average of total scores was 70.78±6.78 in the experimental group and 71.58±7.54 in the control group, with no significant difference in mean scores between the two groups (p>0.05). The post-test average of total scores was 78.41±6.65 in the experimental group and 72.25±7.16 in the control group, a statistically significant difference between the groups (p<0.05). Conclusion: Fetal movement counting was found to increase maternal antenatal attachment. 
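The independent-groups comparison above can be sketched directly from the reported summary statistics. The function below computes Welch's t statistic, a common variant of the independent-samples t test (the abstract does not state which variant was used, so this is an illustration, not a reproduction of the authors' analysis):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic for two independent groups,
    computed from means, standard deviations and sample sizes."""
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Post-test MAAS totals reported above: experimental 78.41 +/- 6.65,
# control 72.25 +/- 7.16, with at least 55 women per group.
t = welch_t(78.41, 6.65, 55, 72.25, 7.16, 55)
print(round(t, 2))  # well above the ~2.0 critical value, consistent with p < 0.05
```

The pre-test means (70.78 vs 71.58) give a t statistic near zero by the same formula, consistent with the reported p > 0.05 at baseline.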
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=antenatal%20maternal%20attachment" title="antenatal maternal attachment">antenatal maternal attachment</a>, <a href="https://publications.waset.org/abstracts/search?q=fetal%20movement%20counting" title=" fetal movement counting"> fetal movement counting</a>, <a href="https://publications.waset.org/abstracts/search?q=pregnancy" title=" pregnancy"> pregnancy</a>, <a href="https://publications.waset.org/abstracts/search?q=midwifery" title=" midwifery"> midwifery</a> </p> <a href="https://publications.waset.org/abstracts/57377/the-effect-of-fetal-movement-counting-on-maternal-antenatal-attachment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57377.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">272</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7290</span> Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Derlis%20Gregor">Derlis Gregor</a>, <a href="https://publications.waset.org/abstracts/search?q=Kevin%20Cikel"> Kevin Cikel</a>, <a href="https://publications.waset.org/abstracts/search?q=Mario%20Arzamendia"> Mario Arzamendia</a>, <a href="https://publications.waset.org/abstracts/search?q=Ra%C3%BAl%20Gregor"> Raúl Gregor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a self-sustaining mobile system for counting and classification of vehicles through processing video. 
It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (Single Board Computer), such as the Raspberry Pi 2, so that it can run in real time. The first step limits the zone of the image that will be processed. The second step detects the moving objects using a BGS (Background Subtraction) algorithm based on the GMM (Gaussian Mixture Model), together with a shadow removal algorithm using physics-based features, followed by morphological operations. The third step performs vehicle detection using edge detection algorithms and vehicle tracking through Kalman filters. The last step registers each passing vehicle and classifies it according to its area. A self-sustaining system is proposed, powered by batteries and photovoltaic solar panels, with data transmission over GPRS (General Packet Radio Service), eliminating the need for external cabling and facilitating its deployment and relocation to any location where it may operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access. 
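The background-subtraction idea in step two can be sketched in its simplest form. The paper uses a GMM-based BGS (commonly available as, e.g., OpenCV's MOG2 subtractor); the toy model below keeps a single running-average background per pixel and flags pixels that deviate from it, which is the same idea with one Gaussian instead of a mixture. All values are invented for illustration.

```python
def update_and_mask(background, frame, alpha=0.05, thresh=25):
    """Return (new_background, foreground_mask) for one grayscale frame row.
    A pixel is foreground if it deviates from the background model by more
    than `thresh`; the background is updated with learning rate `alpha`."""
    mask = []
    new_bg = []
    for b, f in zip(background, frame):
        mask.append(1 if abs(f - b) > thresh else 0)
        new_bg.append((1 - alpha) * b + alpha * f)  # slow background update
    return new_bg, mask

bg = [100, 100, 100, 100]     # learned background row
frame = [100, 180, 179, 101]  # a vehicle covers the two middle pixels
bg, fg = update_and_mask(bg, frame)
print(fg)  # [0, 1, 1, 0]
```

The full pipeline would then apply shadow removal and morphological operations to the mask before the edge-detection and Kalman-tracking steps.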
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intelligent%20transportation%20system" title="intelligent transportation system">intelligent transportation system</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20couting" title=" vehicle counting"> vehicle counting</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20classification" title=" vehicle classification"> vehicle classification</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20processing" title=" video processing"> video processing</a> </p> <a href="https://publications.waset.org/abstracts/43870/design-and-implementation-of-a-counting-and-differentiation-system-for-vehicles-through-video-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43870.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">323</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7289</span> Box Counting Dimension of the Union L of Trinomial Curves When α ≥ 1</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kaoutar%20Lamrini%20Uahabi">Kaoutar Lamrini Uahabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Atounti"> Mohamed Atounti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the present work, we consider one category of curves denoted by L(p, k, r, n). 
These curves are continuous arcs which are trajectories of roots of the trinomial equation zⁿ = αzᵏ + (1 − α), where z is a complex number, n and k are two integers such that 1 ≤ k ≤ n − 1, and α is a real parameter greater than 1. Denoting by L the union of all trinomial curves L(p, k, r, n) and using the box counting dimension as the fractal dimension, we will prove that the dimension of L is equal to 3/2. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feasible%20angles" title="feasible angles">feasible angles</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal%20dimension" title=" fractal dimension"> fractal dimension</a>, <a href="https://publications.waset.org/abstracts/search?q=Minkowski%20sausage" title=" Minkowski sausage"> Minkowski sausage</a>, <a href="https://publications.waset.org/abstracts/search?q=trinomial%20curves" title=" trinomial curves"> trinomial curves</a>, <a href="https://publications.waset.org/abstracts/search?q=trinomial%20equation" title=" trinomial equation"> trinomial equation</a> </p> <a href="https://publications.waset.org/abstracts/87207/box-counting-dimension-of-the-union-l-of-trinomial-curves-when-a-1" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">189</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7288</span> Design of Traffic Counting Android Application with Database Management System and Its Comparative Analysis with Traditional Counting Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Nouman">Muhammad Nouman</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Fahad%20Tiwana"> Fahad Tiwana</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Irfan"> Muhammad Irfan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohsin%20Tiwana"> Mohsin Tiwana</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traffic congestion has been increasing significantly in major metropolitan areas as a result of increased motorization, urbanization, population growth and changes in urban density. Traffic congestion compromises the efficiency of transport infrastructure and causes multiple traffic concerns, including but not limited to increased travel time, safety hazards, air pollution, and fuel consumption. Traffic management has become a serious challenge for federal and provincial governments, as well as for exasperated commuters. Effective, flexible, efficient and user-friendly traffic information/database management systems characterize traffic conditions by making use of traffic counts for storage, processing, and visualization. While emerging data collection technologies continue to proliferate, their accuracy can be verified through comparison of the observed data with manual handheld counts. This paper presents the design of a tablet-based manual traffic counting application and a framework for the development of a traffic database management system for Pakistan. The database management system comprises three components: a traffic counting Android application, an online database, and visualization using Google Maps. An Oracle relational database was chosen for the data structure, and structured query language (SQL) was adopted to program the system architecture. The GIS application links the data from the database and projects it onto a dynamic map for traffic conditions visualization. 
The traffic counting device, together with an example database application for a real-world problem, provided a creative outlet to visualize the uses and advantages of a database management system in real time. Traffic counts collected via the handheld tablet/mobile application can also be used for transportation planning and forecasting. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=manual%20count" title="manual count">manual count</a>, <a href="https://publications.waset.org/abstracts/search?q=emerging%20data%20sources" title=" emerging data sources"> emerging data sources</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20information%20quality" title=" traffic information quality"> traffic information quality</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20surveillance" title=" traffic surveillance"> traffic surveillance</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20counting%20device" title=" traffic counting device"> traffic counting device</a>, <a href="https://publications.waset.org/abstracts/search?q=android%3B%20data%20visualization" title=" android; data visualization"> android; data visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20management" title=" traffic management"> traffic management</a> </p> <a href="https://publications.waset.org/abstracts/101612/design-of-traffic-counting-android-application-with-database-management-system-and-its-comparative-analysis-with-traditional-counting-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101612.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">194</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">7287</span> A Novel Combined Finger Counting and Finite State Machine Technique for ASL Translation Using Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rania%20Ahmed%20Kadry%20Abdel%20Gawad%20Birry">Rania Ahmed Kadry Abdel Gawad Birry</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20El-Habrouk"> Mohamed El-Habrouk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a brief survey of the techniques used for sign language recognition, along with the types of sensors used to perform the task. It then presents a modified method for identifying an isolated sign language gesture using Microsoft Kinect with the OpenNI framework, showing how to extract robust features from the depth image provided by Kinect and the OpenNI interface and use them to create a robust and accurate gesture recognition system for ASL translation. PrimeSense's Natural Interaction Technology for End-user (NITE™) was also used in the C++ implementation of the system. The approach combines a simple finger counting algorithm for static signs with a directional Finite State Machine (FSM) description of the hand motion to help translate a sign language gesture, covering both letters and numbers performed by a user, which in turn may be used as input for voice pronunciation systems. 
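The directional FSM idea can be sketched briefly: a dynamic sign is modelled as an ordered sequence of motion directions, and the machine advances one state per expected direction, tolerating repeated frames of the same direction. The gesture encoding below ("down then left" for an illustrative letter) and the reset rule are invented for illustration, not taken from the paper.

```python
def recognize(directions, pattern):
    """Advance an FSM over observed motion directions; return True if the
    full `pattern` of directions is matched in order."""
    state = 0
    for d in directions:
        if d == pattern[state]:
            state += 1                      # expected direction: advance
            if state == len(pattern):
                return True                 # final state: gesture accepted
        elif d != pattern[max(state - 1, 0)]:
            state = 0                       # unexpected motion: reset
    return False

# Hypothetical encoding of a J-like stroke: down, then left.
print(recognize(["down", "down", "left", "left"], ["down", "left"]))  # True
print(recognize(["left", "up"], ["down", "left"]))                    # False
```

In the full system, the per-frame direction would come from tracked hand coordinates, and static signs would be resolved by the finger-counting branch instead.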
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=American%20sign%20language" title="American sign language">American sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=finger%20counting" title=" finger counting"> finger counting</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Microsoft%20Kinect" title=" Microsoft Kinect"> Microsoft Kinect</a> </p> <a href="https://publications.waset.org/abstracts/43466/a-novel-combined-finger-counting-and-finite-state-machine-technique-for-asl-translation-using-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43466.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7286</span> Applying Big Data to Understand Urban Design Quality: The Correlation between Social Activities and Automated Pedestrian Counts in Dilworth Park, Philadelphia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae%20Min%20Lee">Jae Min Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Presence of people and intensity of activities have been widely accepted as an indicator for successful public spaces in urban design literature. This study attempts to predict the qualitative indicators, presence of people and intensity of activities, with the quantitative measurements of pedestrian counting. We conducted participant observation in Dilworth Park, Philadelphia to collect the total number of people and activities in the park. 
Then, the participant observation data is compared with detailed pedestrian counts at 10 exit locations to estimate the number of park users. The study found that there is a clear correlation between the intensity of social activities and automated pedestrian counts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automated%20pedestrian%20count" title="automated pedestrian count">automated pedestrian count</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20space" title=" public space"> public space</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20design" title=" urban design"> urban design</a> </p> <a href="https://publications.waset.org/abstracts/65013/applying-big-data-to-understand-urban-design-quality-the-correlation-between-social-activities-and-automated-pedestrian-counts-in-dilworth-park-philadelphia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/65013.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7285</span> Labyrinth Fractal on a Convex Quadrilateral</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Harsha%20Gopalakrishnan">Harsha Gopalakrishnan</a>, <a href="https://publications.waset.org/abstracts/search?q=Srijanani%20Anurag%20Prasad"> Srijanani Anurag Prasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Quadrilateral labyrinth fractals are a new type of fractals that are introduced in this paper. 
They belong to a unique class of fractals defined on any plane quadrilateral, inspired by the previously studied labyrinth fractals on the unit square and triangle. This work describes how to construct a quadrilateral labyrinth fractal and examines the circumstances in which it can be understood as the attractor of an iterated function system. Furthermore, some of its topological properties and the Hausdorff and box-counting dimensions of the quadrilateral labyrinth fractals are studied. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fractals" title="fractals">fractals</a>, <a href="https://publications.waset.org/abstracts/search?q=labyrinth%20fractals" title=" labyrinth fractals"> labyrinth fractals</a>, <a href="https://publications.waset.org/abstracts/search?q=dendrites" title=" dendrites"> dendrites</a>, <a href="https://publications.waset.org/abstracts/search?q=iterated%20function%20system" title=" iterated function system"> iterated function system</a>, <a href="https://publications.waset.org/abstracts/search?q=Haus-Dorff%20dimension" title=" Hausdorff dimension"> Hausdorff dimension</a>, <a href="https://publications.waset.org/abstracts/search?q=box-counting%20dimension" title=" box-counting dimension"> box-counting dimension</a>, <a href="https://publications.waset.org/abstracts/search?q=non-self%20similar" title=" non-self similar"> non-self similar</a>, <a href="https://publications.waset.org/abstracts/search?q=non-self%20affine" title=" non-self affine"> non-self affine</a>, <a href="https://publications.waset.org/abstracts/search?q=connected" title=" connected"> connected</a>, <a href="https://publications.waset.org/abstracts/search?q=path%20connected" title=" path connected"> path connected</a> </p> <a href="https://publications.waset.org/abstracts/174613/labyrinth-fractal-on-a-convex-quadrilateral" class="btn btn-primary btn-sm">Procedia</a> <a 
href="https://publications.waset.org/abstracts/174613.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7284</span> Development of Real Time System for Human Detection and Localization from Unmanned Aerial Vehicle Using Optical and Thermal Sensor and Visualization on Geographic Information Systems Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nemi%20Bhattarai">Nemi Bhattarai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, there has been a rapid increase in the use of Unmanned Aerial Vehicles (UAVs) in search and rescue (SAR) operations, disaster management, and many more areas where information about the location of human beings is important. This research primarily focuses on the use of optical and thermal cameras on a UAV platform for real-time detection, localization, and visualization of human beings on a GIS. This research will be beneficial for disaster management: searching for lost humans in wilderness or difficult terrain, detecting abnormal human behavior in border or high-security areas, studying the distribution of people at night, estimating crowd density, managing people flow during evacuations, planning provisions in areas with high human density, and more. 
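The Haar-like detection hinted at by the keywords rests on the integral image (summed-area table), which lets any rectangular pixel sum be read in four lookups, making the feature evaluation fast enough for real-time use. A minimal sketch of that building block, independent of any specific detector implementation:

```python
def integral_image(img):
    """Summed-area table with an extra zero row/column so that rectangle
    sums need no boundary special-casing."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size w x h,
    in four table lookups regardless of rectangle size."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 3, 3))  # 45 (whole image)
print(rect_sum(ii, 1, 1, 2, 2))  # 28 (5+6+8+9)
```

A Haar-like feature is then just the difference of two or more such rectangle sums, evaluated over sliding windows of the optical or thermal frame.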
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=UAV" title="UAV">UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20detection" title=" human detection"> human detection</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time" title=" real-time"> real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=haar-like" title=" haar-like"> haar-like</a>, <a href="https://publications.waset.org/abstracts/search?q=GIS" title=" GIS"> GIS</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20sensor" title=" thermal sensor "> thermal sensor </a> </p> <a href="https://publications.waset.org/abstracts/81472/development-of-real-time-system-for-human-detection-and-localization-from-unmanned-aerial-vehicle-using-optical-and-thermal-sensor-and-visualization-on-geographic-information-systems-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81472.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">466</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7283</span> A Note on the Fractal Dimension of Mandelbrot Set and Julia Sets in Misiurewicz Points</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=O.%20Boussoufi">O. Boussoufi</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Lamrini%20Uahabi"> K. 
Lamrini Uahabi</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Atounti"> M. Atounti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main purpose of this paper is to calculate the fractal dimension of some Julia sets and of the Mandelbrot set at Misiurewicz points. Using MATLAB to generate the Julia set images that correspond to the Misiurewicz points, and using fractal software, we were able to find different measures that characterize those fractals in texture and other features. We focus on the fractal dimension and the error calculated by the software. The box counting method is applied to the entire image to obtain the regression equation (the log-log slope of the image), with the chosen settings available in the FracLac program. Finally, a comparison is done for each image corresponding to the area (boundary) where the Misiurewicz point is located. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=box%20counting" title="box counting">box counting</a>, <a href="https://publications.waset.org/abstracts/search?q=FracLac" title=" FracLac"> FracLac</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal%20dimension" title=" fractal dimension"> fractal dimension</a>, <a href="https://publications.waset.org/abstracts/search?q=Julia%20Sets" title=" Julia Sets"> Julia Sets</a>, <a href="https://publications.waset.org/abstracts/search?q=Mandelbrot%20Set" title=" Mandelbrot Set"> Mandelbrot Set</a>, <a href="https://publications.waset.org/abstracts/search?q=Misiurewicz%20Points" title=" Misiurewicz Points"> Misiurewicz Points</a> </p> <a href="https://publications.waset.org/abstracts/88210/a-note-on-the-fractal-dimension-of-mandelbrot-set-and-julia-sets-in-misiurewicz-points" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88210.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info 
text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">216</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7282</span> Application of Pattern Recognition Technique to the Quality Characterization of Superficial Microstructures in Steel Coatings</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20Gonzalez-Rivera">H. Gonzalez-Rivera</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20L.%20Palmeros-Torres"> J. L. Palmeros-Torres</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper describes the application of traditional computer vision techniques as a procedure for automatic measurement of the secondary dendrite arm spacing (SDAS) from microscopic images. The algorithm is capable of finding the linear or curve-shaped secondary columns of the main microstructure, measuring their length in micrometers and counting the number of spaces between dendrites. The automatic characterization was compared with a set of 1728 manually characterized images, yielding an accuracy of −0.27 µm for the length determination and a precision of ±2.78 counts for dendrite spacing counting, while reducing the characterization time from 7 hours to 2 minutes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dendrite%20arm%20spacing" title="dendrite arm spacing">dendrite arm spacing</a>, <a href="https://publications.waset.org/abstracts/search?q=microstructure%20inspection" title=" microstructure inspection"> microstructure inspection</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=polynomial%20regression" title=" polynomial regression"> polynomial regression</a> </p> <a href="https://publications.waset.org/abstracts/184692/application-of-pattern-recognition-technique-to-the-quality-characterization-of-superficial-microstructures-in-steel-coatings" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/184692.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">46</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7281</span> Immature Palm Tree Detection Using Morphological Filter for Palm Counting with High Resolution Satellite Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nur%20Nadhirah%20Rusyda%20Rosnan">Nur Nadhirah Rusyda Rosnan</a>, <a href="https://publications.waset.org/abstracts/search?q=Nursuhaili%20Najwa%20Masrol"> Nursuhaili Najwa Masrol</a>, <a href="https://publications.waset.org/abstracts/search?q=Nurul%20Fatiha%20MD%20Nor"> Nurul Fatiha MD Nor</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Zafrullah%20Mohammad%20Salim"> Mohammad Zafrullah Mohammad Salim</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Sim%20Choon%20Cheak"> Sim Choon Cheak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accurate inventories of oil palm planted areas are crucial for plantation management, as they impact the overall economy and production of oil. One of the technological advancements in the oil palm industry is semi-automated palm counting, which is replacing conventional manual palm counting based on digitized aerial imagery. Most of the semi-automated palm counting methods developed so far have been limited to mature palms, whose canopy size is clearly represented in satellite images. Immature palms are therefore often left out, since their canopies are barely visible in satellite images. In this paper, an approach using a morphological filter and high-resolution satellite images is proposed to detect immature palm trees, making it possible to count them. The method begins by applying an erosion filter with a window size of 3 m to the high-resolution satellite image. The eroded image is then segmented using watershed segmentation to delineate immature palm tree regions. Local minimum detection is then applied, based on the hypothesis that immature oil palm trees are located at local minima of the grayscale image within an oil palm field. The detection points generated from the local minima are displaced to the center of each immature oil palm region and thinned, leaving a single detection point to represent each tree. The performance of the proposed method was evaluated on three subsets with slopes ranging from 0 to 20° and different planting designs, i.e., straight and terrace. Compared with the ground truth, the proposed method achieved more than 90% accuracy, with an overall F-measure score of up to 0.91. 
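The erosion and local-minimum steps can be sketched roughly as below. The window sizes, the mean-intensity cut, and the use of scipy in place of the authors' toolchain are all illustrative assumptions, and the watershed delineation step is omitted.

```python
import numpy as np
from scipy import ndimage

def detect_local_minima(image, erosion_size=3, neighborhood=5):
    """Sketch of the erosion + local-minimum detection idea.

    The image is first eroded with a square structuring element
    (standing in for the paper's 3 m window), then pixels equal to the
    minimum of their neighborhood are flagged as candidate
    immature-palm locations.
    """
    eroded = ndimage.grey_erosion(image, size=(erosion_size, erosion_size))
    local_min = ndimage.minimum_filter(eroded, size=neighborhood)
    # A candidate is a dark local minimum (darker than the scene average).
    candidates = (eroded == local_min) & (eroded < image.mean())
    # Label connected candidate regions and keep one point per region,
    # mirroring the paper's thinning to a single detection per tree.
    labels, n_trees = ndimage.label(candidates)
    centers = ndimage.center_of_mass(candidates, labels, range(1, n_trees + 1))
    return n_trees, centers

# Synthetic scene: bright field with two dark spots ("immature palms").
field = np.ones((20, 20))
field[5, 5] = 0.0
field[15, 15] = 0.0
n_trees, centers = detect_local_minima(field)
```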
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=immature%20palm%20count" title="immature palm count">immature palm count</a>, <a href="https://publications.waset.org/abstracts/search?q=oil%20palm" title=" oil palm"> oil palm</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a> </p> <a href="https://publications.waset.org/abstracts/175726/immature-palm-tree-detection-using-morphological-filter-for-palm-counting-with-high-resolution-satellite-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/175726.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7280</span> Sentiment Classification Using Enhanced Contextual Valence Shifters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vo%20Ngoc%20Phu">Vo Ngoc Phu</a>, <a href="https://publications.waset.org/abstracts/search?q=Phan%20Thi%20Tuoi"> Phan Thi Tuoi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We have explored different methods of improving the accuracy of sentiment classification. The sentiment orientation of a document can be positive (+), negative (-), or neutral (0). We combine five dictionaries from [2, 3, 4, 5, 6] into a new one with 21,137 entries. The new dictionary contains many verbs, adverbs, phrases and idioms that are not in the five source dictionaries. 
The paper shows that our proposed method, which combines the Term-Counting method with the Enhanced Contextual Valence Shifters method, improves the accuracy of sentiment classification. The combined method achieves an accuracy of 68.984% on the testing dataset and 69.224% on the training dataset. All of these methods are implemented to classify the reviews based on our new dictionary and the Internet Movie dataset. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sentiment%20classification" title="sentiment classification">sentiment classification</a>, <a href="https://publications.waset.org/abstracts/search?q=sentiment%20orientation" title=" sentiment orientation"> sentiment orientation</a>, <a href="https://publications.waset.org/abstracts/search?q=valence%20shifters" title=" valence shifters"> valence shifters</a>, <a href="https://publications.waset.org/abstracts/search?q=contextual" title=" contextual"> contextual</a>, <a href="https://publications.waset.org/abstracts/search?q=valence%20shifters" title=" valence shifters"> valence shifters</a>, <a href="https://publications.waset.org/abstracts/search?q=term%20counting" title=" term counting"> term counting</a> </p> <a href="https://publications.waset.org/abstracts/11410/sentiment-classification-using-enhanced-contextual-valence-shifters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11410.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">505</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7279</span> Distribution of Traffic Volume at Fuel Station during Peak Hour Period on Arterial Road</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Surachai%20Ampawasuvan">Surachai Ampawasuvan</a>, <a href="https://publications.waset.org/abstracts/search?q=Supornchai%20Utainarumol"> Supornchai Utainarumol</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most fuel station customers who drive on a major arterial road want to use the stations to refuel their vehicles during the journey to their destinations. Surveys of the traffic volume of vehicles using fuel stations, conducted with video cameras, automatic counting tools, and questionnaires, found that most users prefer to use fuel stations on holidays rather than on working days, and in the morning rather than in the evening. Comparing the distribution patterns of traffic volume obtained from video cameras and automatic counting tools shows no significant difference. The peak hour rate derived from the questionnaires, at 13 to 14 percent, is similar to the value obtained using the methods of the Institute of Transportation Engineers (ITE); however, both are about twice the 6 to 7 percent observed by video camera and automatic traffic counting. This study therefore suggests that, to forecast the trip generation of vehicles using fuel stations on major arterial roads dominated by through traffic, half of the peak hour rate should be used, making the trip generation forecast more precise, accurate, and compatible with the surrounding environment. 
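The recommended half-peak-hour-rate rule reduces to simple arithmetic; the daily volume figure below is hypothetical, and the function name is illustrative.

```python
def forecast_peak_hour_trips(daily_volume, peak_hour_rate, through_traffic=True):
    """Estimate peak-hour trips generated by a fuel station.

    peak_hour_rate is the fraction of daily traffic in the peak hour
    (e.g. 0.13-0.14 from questionnaires / ITE). Following the study's
    recommendation, it is halved on arterials dominated by through
    traffic, where camera counts put the observed rate near 6-7%.
    """
    rate = peak_hour_rate / 2 if through_traffic else peak_hour_rate
    return daily_volume * rate

# Hypothetical arterial carrying 10,000 vehicles/day past the station.
trips = forecast_peak_hour_trips(10000, 0.14)
```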
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=peak%20rate" title="peak rate">peak rate</a>, <a href="https://publications.waset.org/abstracts/search?q=trips%20generation" title=" trips generation"> trips generation</a>, <a href="https://publications.waset.org/abstracts/search?q=fuel%20station" title=" fuel station"> fuel station</a>, <a href="https://publications.waset.org/abstracts/search?q=arterial%20road" title=" arterial road"> arterial road</a> </p> <a href="https://publications.waset.org/abstracts/54537/distribution-of-traffic-volume-at-fuel-station-during-peak-hour-period-on-arterial-road" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54537.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">408</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7278</span> Motion-Based Detection and Tracking of Multiple Pedestrians</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Harras">A. Harras</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Tsuji"> A. Tsuji</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Terada"> K. Terada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tracking of moving people has gained great importance due to rapid technological advancements in the field of computer vision. The objective of this study is to design a motion-based method for detecting and tracking multiple pedestrians walking randomly in different directions. In our proposed method, a Gaussian mixture model (GMM) is used to detect moving persons in image sequences. 
The GMM reacts to changes that take place in the scene, such as varying illumination and objects that frequently start and stop moving. Background noise in the scene is eliminated by applying morphological operations, and the motion of tracked people is estimated using the Kalman filter, which predicts the tracked location in each frame and determines the likelihood of each detection. We used a benchmark dataset for the evaluation, recorded by a stationary side-wall camera. The scenes from the dataset are taken on a street with up to eight people in front of the camera, in two different scenes lasting 53 and 35 seconds, respectively. For walking pedestrians in close proximity, the proposed method achieved a detection ratio of 87% and a tracking ratio of 77%. When the pedestrians are farther from each other, the detection ratio increases to 90% and the tracking ratio to 79%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20detection" title="automatic detection">automatic detection</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=pedestrians" title=" pedestrians"> pedestrians</a>, <a href="https://publications.waset.org/abstracts/search?q=counting" title=" counting"> counting</a> </p> <a href="https://publications.waset.org/abstracts/82912/motion-based-detection-and-tracking-of-multiple-pedestrians" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82912.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">257</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">7277</span> Comparative Performance of Standing Whole Body Monitor and Shielded Chair Counter for In-vivo Measurements</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Manohari">M. Manohari</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Priyadharshini"> S. Priyadharshini</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Bajeer%20Sulthan"> K. Bajeer Sulthan</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Santhanam"> R. Santhanam</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Chandrasekaran"> S. Chandrasekaran</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Venkatraman"> B. Venkatraman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The in-vivo monitoring facility at Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam, caters to the monitoring of internal exposure of occupational radiation workers from various radioactive facilities of IGCAR. Internal exposure measurement is done using NaI(Tl)-based scintillation detectors. Two types of whole-body counters, namely the Shielded Chair Counter (SC) and the Standing Whole-Body Monitor (SWBM), are being used. The shielded chair is based on a NaI detector 20.3 cm in diameter and 10.15 cm thick. The chair of the system is shielded using lead shot of 10 cm lead equivalence and the detector with 8 cm lead bricks. Counting is done in a sitting geometry. Calibration is done using a 95th-percentile BOMAB phantom. The minimum detectable activity (MDA) for 137Cs with a 60 s counting time is 1150 Bq. The standing whole-body monitor (SWBM) has two NaI(Tl) detectors of size 10.16 × 10.16 × 40.64 cm³ positioned serially, one above the other. It has a shielding thickness of 5 cm lead equivalent. Counting is done in a stand-up geometry. 
Calibration is done with an Ortec phantom having a uniform distribution of mixed radionuclides in the thyroid, thorax and pelvis. The efficiency of the SWBM is 2.4 to 3.5 times higher than that of the shielded chair in the energy range of 279 to 1332 keV. An MDA of 250 Bq for 137Cs can be achieved with a counting time of 60 s. The MDA for 131I in the thyroid was estimated as 100 Bq from the whole-body MDA at one day post intake. The standing whole-body monitor is better in terms of efficiency, MDA and ease of positioning. In emergency situations, the optimal MDAs for the in-vivo monitoring service are 1000 Bq for 137Cs and 100 Bq for 131I. Hence, the SWBM is more suitable for the rapid screening of workers as well as the public in the case of an emergency. When a person reports for counting, there is a potential for external contamination. In the SWBM, it is feasible to discriminate such contamination, as the subject can be counted in either anterior or posterior geometry, which is not possible in the SC. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=minimum%20detectable%20activity" title="minimum detectable activity">minimum detectable activity</a>, <a href="https://publications.waset.org/abstracts/search?q=shielded%20chair" title=" shielded chair"> shielded chair</a>, <a href="https://publications.waset.org/abstracts/search?q=shielding%20thickness" title=" shielding thickness"> shielding thickness</a>, <a href="https://publications.waset.org/abstracts/search?q=standing%20whole%20body%20monitor" title=" standing whole body monitor"> standing whole body monitor</a> </p> <a href="https://publications.waset.org/abstracts/185279/comparative-performance-of-standing-whole-body-monitor-and-shielded-chair-counter-for-in-vivo-measurements" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185279.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 
py-1 float-right rounded"> Downloads <span class="badge badge-light">46</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7276</span> Participation in IAEA Proficiency Test to Analyse Cobalt, Strontium and Caesium in Seawater Using Direct Counting and Radiochemical Techniques </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Visetpotjanakit">S. Visetpotjanakit</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Khrautongkieo"> C. Khrautongkieo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Radiation monitoring in the environment and foodstuffs is one of the main responsibilities of the Office of Atoms for Peace (OAP) as the nuclear regulatory body of Thailand. The main goal of the OAP is to assure the safety of the Thai people and environment from any radiological incidents. Various radioanalytical methods have been developed to monitor radiation and radionuclides in environmental and foodstuff samples. To validate our analytical performance, several proficiency test exercises from the International Atomic Energy Agency (IAEA) have been performed. Here, the results of a proficiency test exercise referred to as the Proficiency Test for Tritium, Cobalt, Strontium and Caesium Isotopes in Seawater 2017 (IAEA-RML-2017-01) are presented. All radionuclides except ³H were analysed using various radioanalytical methods: direct gamma-ray counting to determine ⁶⁰Co, ¹³⁴Cs and ¹³⁷Cs, and radiochemical techniques developed in-house to analyse ¹³⁴Cs and ¹³⁷Cs using an AMP pre-concentration technique and ⁹⁰Sr using a di-(2-ethylhexyl) phosphoric acid (HDEHP) liquid extraction technique. The analysis results were submitted to the IAEA. All results passed the IAEA criteria for accuracy, precision and trueness and obtained ‘Accepted’ status. 
These results confirm the capability of the OAP environmental radiation laboratory to monitor radiation in the environment. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=international%20atomic%20energy%20agency" title="international atomic energy agency">international atomic energy agency</a>, <a href="https://publications.waset.org/abstracts/search?q=proficiency%20test" title=" proficiency test"> proficiency test</a>, <a href="https://publications.waset.org/abstracts/search?q=radiation%20monitoring" title=" radiation monitoring"> radiation monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=seawater" title=" seawater"> seawater</a> </p> <a href="https://publications.waset.org/abstracts/93787/participation-in-iaea-proficiency-test-to-analyse-cobalt-strontium-and-caesium-in-seawater-using-direct-counting-and-radiochemical-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93787.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">172</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7275</span> Deep Learning-Based Object Detection on Low Quality Images: A Case Study of Real-Time Traffic Monitoring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jean-Francois%20Rajotte">Jean-Francois Rajotte</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20Sotir"> Martin Sotir</a>, <a href="https://publications.waset.org/abstracts/search?q=Frank%20Gouineau"> Frank Gouineau</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The installation and management of traffic monitoring devices can be costly from both a financial and 
resource point of view. It is therefore important to take advantage of in-place infrastructures to extract the most information. Here we show how low-quality urban road traffic images from cameras already available in many cities (such as Montreal, Vancouver, and Toronto) can be used to estimate traffic flow. To this end, we use a pre-trained neural network, developed for object detection, to count vehicles within images. We then compare the results with human annotations gathered through crowdsourcing campaigns. We use this comparison to assess performance and calibrate the neural network annotations. As a use case, we consider six months of continuous monitoring over hundreds of cameras installed in the city of Montreal. We compare the results with city-provided manual traffic counting performed in similar conditions at the same location. The good performance of our system allows us to consider applications which can monitor the traffic conditions in near real-time, making the counting usable for traffic-related services. Furthermore, the resulting annotations pave the way for building a historical vehicle counting dataset to be used for analysing the impact of road traffic on many city-related issues, such as urban planning, security, and pollution. 
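The abstract does not specify how the crowdsourced comparison is turned into a calibration; a minimal sketch, assuming a simple least-squares linear correction, might look like the following (all counts are hypothetical).

```python
import numpy as np

def calibrate_counts(model_counts, human_counts):
    """Fit a linear correction y ≈ a*x + b mapping raw detector counts
    to human-annotated counts, via least squares."""
    a, b = np.polyfit(np.asarray(model_counts, dtype=float),
                      np.asarray(human_counts, dtype=float), 1)
    return a, b

def apply_calibration(model_counts, a, b):
    """Correct raw detector counts with the fitted coefficients."""
    return a * np.asarray(model_counts, dtype=float) + b

# Hypothetical data: the detector undercounts the human truth by ~20%.
model = [10, 20, 30, 40]
human = [12, 24, 36, 48]
a, b = calibrate_counts(model, human)
corrected = apply_calibration([25], a, b)
```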
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20monitoring" title="traffic monitoring">traffic monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20annotation" title=" image annotation"> image annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicles" title=" vehicles"> vehicles</a>, <a href="https://publications.waset.org/abstracts/search?q=roads" title=" roads"> roads</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20systems" title=" real-time systems"> real-time systems</a> </p> <a href="https://publications.waset.org/abstracts/82867/deep-learning-based-object-detection-on-low-quality-images-a-case-study-of-real-time-traffic-monitoring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82867.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">200</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7274</span> iCount: An Automated Swine Detection and Production Monitoring System Based on Sobel Filter and Ellipse Fitting Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jocelyn%20B.%20Barbosa">Jocelyn B. Barbosa</a>, <a href="https://publications.waset.org/abstracts/search?q=Angeli%20L.%20Magbaril"> Angeli L. 
Magbaril</a>, <a href="https://publications.waset.org/abstracts/search?q=Mariel%20T.%20Sabanal"> Mariel T. Sabanal</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20Paul%20T.%20Galario"> John Paul T. Galario</a>, <a href="https://publications.waset.org/abstracts/search?q=Mikka%20P.%20Baldovino"> Mikka P. Baldovino</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use of technology has become ubiquitous in different areas of business today. With the advent of digital imaging and database technology, business owners have been motivated to integrate technology into their business operations, ranging from small and medium to large enterprises. Technology has been found to bring many benefits that can make a business grow. Hog or swine raising, for example, is a very popular enterprise in the Philippines, whose challenges in production monitoring can be addressed through technology integration. Swine production monitoring can become a tedious task as the enterprise grows larger. Specifically, problems such as delayed and inconsistent reports are likely to occur if the swine in each pen of each building are counted manually. In this study, we present iCount, which aims to ensure efficient swine detection and counting that hastens the swine production monitoring task. We develop a system that automatically detects and counts swine based on a Sobel filter and an ellipse fitting model, given still photos of a group of swine captured in a pen. We improve the Sobel filter detection result by implementing an 8-neighborhood rule. An ellipse fitting technique is then employed for proper swine detection. Furthermore, the system can generate periodic production reports and can identify the specific consumables to be served to the swine according to schedules. Experiments reveal that our algorithm provides an efficient way of detecting swine, thereby providing a significant amount of accuracy in production monitoring. 
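The Sobel edge-detection stage that iCount builds on can be sketched as follows. The normalization, threshold, and synthetic frame are illustrative assumptions; the 8-neighborhood cleanup and ellipse fitting described in the abstract are omitted.

```python
import numpy as np
from scipy import ndimage

def sobel_edge_map(image, threshold=0.5):
    """Sobel gradient magnitude, thresholded to a binary edge map.

    This is the kind of edge map a Sobel-based pipeline would pass on
    to later cleanup and ellipse-fitting stages.
    """
    gx = ndimage.sobel(image, axis=1)  # horizontal gradient
    gy = ndimage.sobel(image, axis=0)  # vertical gradient
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude = magnitude / magnitude.max()  # normalize to [0, 1]
    return magnitude > threshold

# Synthetic frame: one bright elliptical blob ("swine") on a dark pen floor.
yy, xx = np.mgrid[0:40, 0:40]
frame = (((yy - 20) / 8.0) ** 2 + ((xx - 20) / 12.0) ** 2 <= 1).astype(float)
edges = sobel_edge_map(frame)
```

Edges appear only along the blob's outline, so uniform regions (blob interior, pen floor) stay empty.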
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20swine%20counting" title="automatic swine counting">automatic swine counting</a>, <a href="https://publications.waset.org/abstracts/search?q=swine%20detection" title=" swine detection"> swine detection</a>, <a href="https://publications.waset.org/abstracts/search?q=swine%20production%20monitoring" title=" swine production monitoring"> swine production monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=ellipse%20fitting%20model" title=" ellipse fitting model"> ellipse fitting model</a>, <a href="https://publications.waset.org/abstracts/search?q=sobel%20filter" title=" sobel filter"> sobel filter</a> </p> <a href="https://publications.waset.org/abstracts/62074/icount-an-automated-swine-detection-and-production-monitoring-system-based-on-sobel-filter-and-ellipse-fitting-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62074.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">311</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=counting%20people&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=counting%20people&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=counting%20people&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=counting%20people&page=5">5</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=counting%20people&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=counting%20people&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=counting%20people&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=counting%20people&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=counting%20people&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=counting%20people&page=243">243</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=counting%20people&page=244">244</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=counting%20people&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a 
href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 
World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>