<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: vehicle detection</title> <meta name="description" content="Search results for: vehicle detection"> <meta name="keywords" content="vehicle detection"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" 
alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="vehicle detection" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div 
class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="vehicle detection"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4758</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: vehicle detection</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4758</span> Automatic Vehicle Detection Using Circular Synthetic Aperture Radar Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Leping%20Chen">Leping Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Daoxiang%20An"> Daoxiang An</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaotao%20Huang"> Xiaotao Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic vehicle detection using synthetic aperture radar (SAR) images has been widely researched, as has detection using optical remote sensing images. However, most research treats detection as an independent problem, failing to make full use of the information in SAR data.
In circular SAR (CSAR), the two long borders of a vehicle will shrink if the imaging surface is set higher than the reference one. Based on this variation, an automatic vehicle detection method using CSAR images is proposed to enhance detection ability in complex environments, such as closely packed vehicles, which can confuse the detector. The method uses multiple images generated at different imaging-plane heights to obtain an energy-concentrated image for detection and then uses the maximally stable extremal regions (MSER) method to detect vehicles. Detection results are given to verify the effectiveness and correctness of the proposed method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=circular%20SAR" title="circular SAR">circular SAR</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20detection" title=" vehicle detection"> vehicle detection</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic" title=" automatic"> automatic</a>, <a href="https://publications.waset.org/abstracts/search?q=imaging" title=" imaging"> imaging</a> </p> <a href="https://publications.waset.org/abstracts/84548/automatic-vehicle-detection-using-circular-synthetic-aperture-radar-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84548.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">367</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4757</span> Vehicle Detection and Tracking Using Deep Learning Techniques in Surveillance Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abe%20D.%20Desta">Abe D.
Desta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study suggests a deep learning-based method for identifying and following moving objects in surveillance video. The proposed method first detects vehicles using a fast regional convolutional neural network (F-RCNN) trained on a substantial dataset of vehicle images. A Kalman filter and a data association technique based on the Hungarian algorithm are then used to track the detected vehicles over time. The F-RCNN approach proved effective in achieving high detection accuracy and robustness in this study: the vehicle detection and tracking system achieved an accuracy of 97.4%. The F-RCNN algorithm was also compared to other popular object detection algorithms and was found to outperform them in terms of both detection accuracy and speed. The presented system, which has application potential in actual surveillance systems, shows the usefulness of deep learning approaches in vehicle detection and tracking.
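As an illustration of the data association step this abstract describes (matching new detections to existing tracks), the sketch below brute-forces the optimal assignment that the Hungarian algorithm computes efficiently for larger problems, using IoU between bounding boxes as the affinity. This is an illustrative sketch under those assumptions, not the authors' implementation.

```python
from itertools import permutations

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def associate(tracks, detections, min_iou=0.3):
    """Assign each track to at most one detection, maximizing total IoU.

    Brute force over permutations for clarity; the Hungarian algorithm
    reaches the same optimum in O(n^3) and is what would be used in practice.
    """
    best, best_score = {}, -1.0
    n = min(len(tracks), len(detections))
    for perm in permutations(range(len(detections)), n):
        # keep only pairings whose overlap is plausible
        pairs = [(t, d) for t, d in zip(range(len(tracks)), perm)
                 if iou(tracks[t], detections[d]) >= min_iou]
        score = sum(iou(tracks[t], detections[d]) for t, d in pairs)
        if score > best_score:
            best_score, best = score, dict(pairs)
    return best

# toy usage: two tracks, two detections that arrive in swapped order
matches = associate([(0, 0, 10, 10), (20, 20, 30, 30)],
                    [(19, 19, 29, 29), (1, 1, 11, 11)])
```

In a full tracker, each matched detection would then update the corresponding Kalman filter state.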
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-regional%20convolutional%20neural%20networks" title=" fast-regional convolutional neural networks"> fast-regional convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20tracking" title=" vehicle tracking"> vehicle tracking</a> </p> <a href="https://publications.waset.org/abstracts/164803/vehicle-detection-and-tracking-using-deep-learning-techniques-in-surveillance-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164803.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">126</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4756</span> Road Vehicle Recognition Using Magnetic Sensing Feature Extraction and Classification </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiao%20Chen">Xiao Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoying%20Kong"> Xiaoying Kong</a>, <a href="https://publications.waset.org/abstracts/search?q=Min%20Xu"> Min Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a 
road vehicle detection approach for the intelligent transportation system. The approach mainly uses a low-cost magnetic sensor and an associated data collection system to collect magnetic signals. The system can measure changes in the magnetic field, and it can also detect and count vehicles. We extend Mel Frequency Cepstral Coefficients to analyze vehicle magnetic signals. Vehicle type features are extracted using the cepstrum representation, frame energy, and gap cepstrum of the magnetic signals. We design a 2-dimensional map algorithm using Vector Quantization to classify vehicle magnetic features into four typical types of vehicles in Australian suburbs: sedan, van, truck, and bus. Experimental results show that our approach achieves a high level of accuracy for vehicle detection and classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vehicle%20classification" title="vehicle classification">vehicle classification</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=road%20traffic%20model" title=" road traffic model"> road traffic model</a>, <a href="https://publications.waset.org/abstracts/search?q=magnetic%20sensing" title=" magnetic sensing"> magnetic sensing</a> </p> <a href="https://publications.waset.org/abstracts/86644/road-vehicle-recognition-using-magnetic-sensing-feature-extraction-and-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4755</span> A Background Subtraction Based
Moving Object Detection Around the Host Vehicle</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyojin%20Lim">Hyojin Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Cuong%20Nguyen%20Khac"> Cuong Nguyen Khac</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Youl%20Jung"> Ho-Youl Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a moving object detection method that helps the driver safely take his/her car out of a parking lot. When moving objects such as motorbikes, pedestrians, other cars, and obstacles are detected at the rear side of the host vehicle, the proposed algorithm warns the driver. We assume that the host vehicle is just before departure. Gaussian mixture model (GMM) based background subtraction is applied. Pre-processing such as smoothing and post-processing such as morphological filtering are added. We examine which color space performs better for the detection of moving objects: three color spaces, RGB, YCbCr, and Y, are applied and compared in terms of detection rate. Through simulation, we show that the RGB space is more suitable for moving object detection based on background subtraction.
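The background subtraction step this abstract relies on can be illustrated with a simplified single-Gaussian-per-pixel model (the full GMM keeps several Gaussians per pixel to handle multimodal backgrounds). The sketch below is illustrative only, not the authors' implementation; applied per channel, it works in any of the color spaces the paper compares.

```python
import numpy as np

def update_background(frame, mean, var, lr=0.05, k=2.5):
    """Single-Gaussian-per-pixel background model (a simplification of
    the full GMM): a pixel is foreground when it deviates from the
    running mean by more than k standard deviations."""
    diff = frame.astype(np.float64) - mean
    fg = (diff ** 2) > (k ** 2) * var      # foreground mask
    bg = ~fg
    # update the model only where the pixel matched the background
    mean[bg] += lr * diff[bg]
    var[bg] += lr * (diff[bg] ** 2 - var[bg])
    return fg

# toy usage: a static background with one bright "moving object" pixel
mean = np.full((4, 4), 100.0)
var = np.full((4, 4), 25.0)
frame = np.full((4, 4), 100.0)
frame[1, 1] = 200.0                        # the moving object
mask = update_background(frame, mean, var)
```

Morphological opening/closing, as the paper mentions, would then clean up the raw mask.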
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gaussian%20mixture%20model" title="gaussian mixture model">gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20subtraction" title=" background subtraction"> background subtraction</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20object%20detection" title=" moving object detection"> moving object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20space" title=" color space"> color space</a>, <a href="https://publications.waset.org/abstracts/search?q=morphological%20filtering" title=" morphological filtering"> morphological filtering</a> </p> <a href="https://publications.waset.org/abstracts/32650/a-background-subtraction-based-moving-object-detection-around-the-host-vehicle" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">617</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4754</span> Parking Space Detection and Trajectory Tracking Control for Vehicle Auto-Parking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiuh-Jer%20Huang">Shiuh-Jer Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu-Sheng%20Hsu"> Yu-Sheng Hsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An on-board parking space detection system, parking trajectory planning, and a tracking control mechanism are the key components of a vehicle backward auto-parking system.
First, a pair of ultrasonic sensors is installed on each side of the vehicle body to detect the relative distance between the ego-car and surrounding obstacles. The dimensions of a detected empty space can be calculated from the vehicle speed and the time history of the ultrasonic sensor readings. This result is used for constructing a 2D environmental map of the vehicle's surroundings and judging the available parking type. Finally, the auto-parking controller executes on-line optimal parking trajectory planning based on this 2D environmental map and monitors the real-time parking trajectory tracking control. This low-cost auto-parking system was tested on a model car. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vehicle%20auto-parking" title="vehicle auto-parking">vehicle auto-parking</a>, <a href="https://publications.waset.org/abstracts/search?q=parking%20space%20detection" title=" parking space detection"> parking space detection</a>, <a href="https://publications.waset.org/abstracts/search?q=parking%20path%20tracking%20control" title=" parking path tracking control"> parking path tracking control</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20fuzzy%20controller" title=" intelligent fuzzy controller"> intelligent fuzzy controller</a> </p> <a href="https://publications.waset.org/abstracts/78571/parking-space-detection-and-trajectory-tracking-control-for-vehicle-auto-parking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78571.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">244</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4753</span> Multi-Vehicle Detection Using Histogram of Oriented Gradients Features
and Adaptive Sliding Window Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saumya%20Srivastava">Saumya Srivastava</a>, <a href="https://publications.waset.org/abstracts/search?q=Rina%20Maiti"> Rina Maiti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to achieve a better performance of vehicle detection in a complex environment, we present an efficient approach for a multi-vehicle detection system using an adaptive sliding window technique. For a given frame, image segmentation is carried out to establish the region of interest. Gradient computation followed by thresholding, denoising, and morphological operations is performed to extract the binary search image. Near-region field and far-region field are defined to generate hypotheses using the adaptive sliding window technique on the resultant binary search image. For each vehicle candidate, features are extracted using a histogram of oriented gradients, and a pre-trained support vector machine is applied for hypothesis verification. Later, the Kalman filter is used for tracking the vanishing point. The experimental results show that the method is robust and effective on various roads and driving scenarios. The algorithm was tested on highways and urban roads in India. 
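The histogram-of-oriented-gradients feature extraction this abstract mentions can be illustrated with a minimal single-cell version (no block normalization or cell grid, unlike the full descriptor fed to the SVM). This is an illustrative sketch, not the authors' code.

```python
import numpy as np

def hog_cell(cell, bins=9):
    """Unsigned 9-bin histogram of oriented gradients for one cell,
    a simplified piece of the full HOG descriptor."""
    gx = np.zeros_like(cell, dtype=float)
    gy = np.zeros_like(cell, dtype=float)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]     # central differences
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist = np.zeros(bins)
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int(a // (180.0 / bins)) % bins] += m  # vote by magnitude
    return hist

# toy usage: a vertical edge puts all gradient energy in the first bin
cell = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))
h = hog_cell(cell)
```

In the full pipeline, many such cell histograms are concatenated, block-normalized, and passed to the pre-trained SVM for hypothesis verification.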
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gradient" title="gradient">gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20detection" title=" vehicle detection"> vehicle detection</a>, <a href="https://publications.waset.org/abstracts/search?q=histograms%20of%20oriented%20gradients" title=" histograms of oriented gradients"> histograms of oriented gradients</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/156497/multi-vehicle-detection-using-histogram-of-oriented-gradients-features-and-adaptive-sliding-window-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156497.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">124</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4752</span> Analysis of Collision Avoidance System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20Gayathri%20Devi">N. Gayathri Devi</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Batri"> K. Batri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The advent of technology has increased traffic hazards, and road accidents take place frequently. A collision detection system in an automobile aims at reducing or mitigating the severity of an accident. This project aims at avoiding vehicle head-on collisions by means of a collision detection algorithm.
The collision detection algorithm predicts the collision, and the avoidance or minimization has to be carried out within a few seconds of confirmation. In critical situations, collision minimization is made possible by turning the vehicle to the desired turn radius so that the collision impact can be reduced. In order to avoid the collision completely, the turn should be made at reduced speed in order to maintain stability. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=collision%20avoidance%20system" title="collision avoidance system">collision avoidance system</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20to%20collision" title=" time to collision"> time to collision</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20to%20turn" title=" time to turn"> time to turn</a>, <a href="https://publications.waset.org/abstracts/search?q=turn%20radius" title=" turn radius"> turn radius</a> </p> <a href="https://publications.waset.org/abstracts/30106/analysis-of-collision-avoidance-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30106.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">549</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4751</span> Autonomous Vehicle Detection and Classification in High Resolution Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20J.%20Ghandour">Ali J. Ghandour</a>, <a href="https://publications.waset.org/abstracts/search?q=Houssam%20A.%20Krayem"> Houssam A.
Krayem</a>, <a href="https://publications.waset.org/abstracts/search?q=Abedelkarim%20A.%20Jezzini"> Abedelkarim A. Jezzini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> High-resolution satellite images and remote sensing can provide global information quickly compared to traditional methods of data collection. At such high resolution, a road is no longer a thin line; objects such as cars and trees are easily identifiable. Automatic vehicle enumeration can be considered one of the most important applications in traffic management. In this paper, an autonomous vehicle detection and classification approach for the highway environment is proposed. This approach consists mainly of three stages: (i) first, a set of preprocessing operations is applied, including soil, vegetation, and water suppression. (ii) Then, road network detection and delineation is implemented using a built-up area index, followed by several morphological operations. This step plays an important role in increasing the overall detection accuracy since vehicle candidates are objects contained within the road networks only. (iii) Multi-level Otsu segmentation is implemented in the last stage, resulting in vehicle detection and classification, where detected vehicles are classified into cars and trucks. Accuracy assessment analysis is conducted over different study areas to show the efficiency of the proposed method, especially in the highway environment.
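The multi-level Otsu step in stage (iii) can be sketched as a brute-force two-threshold search that maximizes between-class variance over three intensity classes. This is an illustrative sketch under that reading of the method, not the authors' implementation.

```python
import numpy as np

def otsu_two_level(pixels, levels=256):
    """Two-threshold Otsu: search for the pair (t1, t2) maximizing
    between-class variance over three classes. Maximizing the sum of
    w_k * mu_k^2 is equivalent, since the global mean is constant."""
    hist = np.bincount(pixels.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                     # normalized histogram
    idx = np.arange(levels)
    best, best_t = -1.0, (0, 0)
    for t1 in range(1, levels - 1):
        for t2 in range(t1 + 1, levels):
            var_b = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, levels)):
                w = p[lo:hi].sum()            # class weight
                if w > 0:
                    mu = (idx[lo:hi] * p[lo:hi]).sum() / w
                    var_b += w * mu ** 2
            if var_b > best:
                best, best_t = var_b, (t1, t2)
    return best_t

# toy usage: three well-separated intensity clusters
px = np.array([5] * 10 + [25] * 10 + [50] * 10)
t1, t2 = otsu_two_level(px, levels=64)
```

On road-masked imagery, the resulting three classes would correspond to road surface and the two vehicle classes.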
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title="remote sensing">remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20identification" title=" object identification"> object identification</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20and%20road%20extraction" title=" vehicle and road extraction"> vehicle and road extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20and%20road%20features-based%20classification" title=" vehicle and road features-based classification"> vehicle and road features-based classification</a> </p> <a href="https://publications.waset.org/abstracts/86230/autonomous-vehicle-detection-and-classification-in-high-resolution-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86230.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4750</span> Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Derlis%20Gregor">Derlis Gregor</a>, <a href="https://publications.waset.org/abstracts/search?q=Kevin%20Cikel"> Kevin Cikel</a>, <a href="https://publications.waset.org/abstracts/search?q=Mario%20Arzamendia"> Mario Arzamendia</a>, <a href="https://publications.waset.org/abstracts/search?q=Ra%C3%BAl%20Gregor"> Raúl Gregor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a self-sustaining mobile system for counting and classification of vehicles 
through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (single-board computer), such as the Raspberry Pi 2, so that it can run in real time. The first step of the proposed algorithm limits the zone of the image that will be processed. The second step performs the detection of moving objects using a BGS (background subtraction) algorithm based on the GMM (Gaussian mixture model), as well as a shadow removal algorithm using physics-based features, followed by morphological operations. In the third step, vehicle detection is performed using edge detection algorithms, and vehicle following is done with Kalman filters. The last step of the proposed algorithm registers each passing vehicle and classifies it according to its area. A self-sustaining system is proposed, powered by batteries and photovoltaic solar panels; data transmission is done through GPRS (General Packet Radio Service), eliminating the need for external cabling, which facilitates its deployment and relocation to any location where it can operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.
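The final step's area-based classification can be sketched as simple thresholding of blob areas. The thresholds and class names below are invented for illustration (the paper does not list them); calibration would depend on camera height and perspective. Not the authors' code.

```python
def classify_by_area(blob_areas, car_min=800, truck_min=3000):
    """Register each passing blob and classify it by area in pixels.

    Thresholds are hypothetical and would be calibrated per camera."""
    counts = {"motorbike": 0, "car": 0, "truck": 0}
    for area in blob_areas:
        if area >= truck_min:
            counts["truck"] += 1
        elif area >= car_min:
            counts["car"] += 1
        else:
            counts["motorbike"] += 1
    return counts

# toy usage: four tracked blobs crossing the counting line
totals = classify_by_area([500, 1200, 4000, 900])
```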
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intelligent%20transportation%20system" title="intelligent transportation system">intelligent transportation system</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20couting" title=" vehicle counting"> vehicle counting</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20classification" title=" vehicle classification"> vehicle classification</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20processing" title=" video processing"> video processing</a> </p> <a href="https://publications.waset.org/abstracts/43870/design-and-implementation-of-a-counting-and-differentiation-system-for-vehicles-through-video-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43870.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">322</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4749</span> Location Detection of Vehicular Accident Using Global Navigation Satellite Systems/Inertial Measurement Units Navigator </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neda%20Navidi">Neda Navidi</a>, <a href="https://publications.waset.org/abstracts/search?q=Rene%20Jr.%20Landry"> Rene Jr. Landry</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Vehicle tracking and accident recognition are of interest to many industries, such as insurance and vehicle rental companies.
The main goal of this paper is to detect the location of a car accident by combining different methods. The methods considered in this paper are Global Navigation Satellite Systems/Inertial Measurement Units (GNSS/IMU)-based navigation and vehicle accident detection algorithms. They operate on a set of raw measurements obtained from a designed black box integrating GNSS and inertial sensors. Another concern of this paper is the definition of an accident detection algorithm based on vehicle jerk to identify the position of the accident. In fact, the results convinced us that, even in GNSS blockage areas, the position of the accident could be detected by GNSS/INS integration with a 50% improvement compared to standalone GNSS. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=driver%20behavior%20monitoring" title="driver behavior monitoring">driver behavior monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=integration" title=" integration"> integration</a>, <a href="https://publications.waset.org/abstracts/search?q=IMU" title=" IMU"> IMU</a>, <a href="https://publications.waset.org/abstracts/search?q=GNSS" title=" GNSS"> GNSS</a>, <a href="https://publications.waset.org/abstracts/search?q=monitoring" title=" monitoring"> monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a> </p> <a href="https://publications.waset.org/abstracts/72798/location-detection-of-vehicular-accident-using-global-navigation-satellite-systemsinertial-measurement-units-navigator" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72798.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">234</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5
class="card-header" style="font-size:.9rem"><span class="badge badge-info">4748</span> Hazardous Vegetation Detection in Right-Of-Way Power Transmission Lines in Brazil Using Unmanned Aerial Vehicle and Light Detection and Ranging</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mauricio%20George%20Miguel%20Jardini">Mauricio George Miguel Jardini</a>, <a href="https://publications.waset.org/abstracts/search?q=Jose%20Antonio%20Jardini"> Jose Antonio Jardini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Transmission power utilities manage kilometers of circuits, many with particular vegetation-growth conditions. To control these rights-of-way, maintenance teams perform ground and air inspections, and the identification method is subjective (indirect). On a ground inspection, when an irregularity is identified, for example, high vegetation threatening contact with the conductor cable, pruning or suppression is performed immediately. In an aerial inspection, the suppression team is mobilized to the identified point. This work investigates the use of 3D modeling of a transmission line segment using RGB (red, green, and blue) images and LiDAR (Light Detection and Ranging) sensor data. Both sensors are coupled to an unmanned aerial vehicle. The goal is the accurate and timely detection of vegetation along the right-of-way that can cause shutdowns. 
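The clearance check that such a survey enables can be illustrated with a minimal sketch: given UAV-LiDAR returns already classified as vegetation and a conductor span modeled as a straight 3-D segment, flag every vegetation point that violates a clearance distance. All coordinates, the straight-segment conductor model, and the 4 m threshold below are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical clearance check: flag vegetation LiDAR returns closer to a
# conductor (modeled as a straight 3-D segment) than a clearance threshold.
# Coordinates and the 4 m threshold are illustrative, not from the paper.

def point_segment_distance(p, a, b):
    """Shortest distance (m) from point p to segment a-b (all 3-tuples)."""
    ab = tuple(bi - ai for ai, bi in zip(a, b))
    ap = tuple(pi - ai for ai, pi in zip(a, p))
    ab2 = sum(c * c for c in ab)
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / ab2))
    closest = tuple(ai + t * ci for ai, ci in zip(a, ab))
    return math.dist(p, closest)

def hazardous_points(veg_points, conductor_a, conductor_b, clearance_m=4.0):
    """Vegetation points violating the clearance around the conductor span."""
    return [p for p in veg_points
            if point_segment_distance(p, conductor_a, conductor_b) < clearance_m]

if __name__ == "__main__":
    span = ((0.0, 0.0, 15.0), (100.0, 0.0, 15.0))  # conductor endpoints (m)
    vegetation = [(50.0, 1.0, 13.0),   # tall tree near mid-span
                  (20.0, 30.0, 5.0)]   # shrub far from the line
    print(hazardous_points(vegetation, *span))  # -> [(50.0, 1.0, 13.0)]
```

In a real survey the conductor would be modeled by its catenary and the vegetation class would come from point-cloud classification of the LiDAR returns.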
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20modeling" title="3D modeling">3D modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=right-of-way" title=" right-of-way"> right-of-way</a>, <a href="https://publications.waset.org/abstracts/search?q=transmission%20lines" title=" transmission lines"> transmission lines</a>, <a href="https://publications.waset.org/abstracts/search?q=vegetation" title=" vegetation"> vegetation</a> </p> <a href="https://publications.waset.org/abstracts/126372/hazardous-vegetation-detection-in-right-of-way-power-transmission-lines-in-brazil-using-unmanned-aerial-vehicle-and-light-detection-and-ranging" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126372.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">131</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4747</span> Vehicle Timing Motion Detection Based on Multi-Dimensional Dynamic Detection Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jia%20Li">Jia Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Xing%20Wei"> Xing Wei</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuchen%20Hong"> Yuchen Hong</a>, <a href="https://publications.waset.org/abstracts/search?q=Yang%20Lu"> Yang Lu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detecting vehicle behavior has always been the focus of intelligent transportation, but with the explosive growth of the number of vehicles and the complexity of the road environment, the 
vehicle behavior videos captured by traditional surveillance have been unable to satisfy the study of vehicle behavior. The traditional method of manually labeling vehicle behavior is too time-consuming and labor-intensive, while existing object detection and tracking algorithms have poor practicability and a low behavioral location detection rate. This paper proposes a vehicle behavior detection algorithm based on a dual-stream convolution network and a multi-dimensional video dynamic detection network. In the videos, the straight-line behavior of the vehicle defaults to the background behavior; changing lanes, turning, and turning around are set as target behaviors. The purpose of this model is to automatically mark the target behavior of the vehicle in untrimmed videos. First, the target behavior proposals in the long video are extracted through the dual-stream convolution network. The model uses the dual-stream convolutional network to generate a one-dimensional action score waveform and then extracts segments with scores above a given threshold M as preliminary vehicle behavior proposals. Second, the preliminary proposals are pruned and identified using the multi-dimensional video dynamic detection network. Drawing on hierarchical reinforcement learning, the multi-dimensional network includes a Timer module and a Spacer module, where the Timer module mines temporal information in the video stream and the Spacer module extracts spatial information in the video frame. The Timer and Spacer modules are implemented with Long Short-Term Memory (LSTM) networks and start from an all-zero hidden state. The Timer module uses the Transformer mechanism to extract timing information from the video stream and extracts features by linear mapping and other methods. Finally, the model fuses temporal and spatial information and obtains the location and category of the behavior through the softmax layer. 
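The first stage's segment extraction can be illustrated with a minimal sketch: threshold the one-dimensional action-score waveform and keep contiguous runs above the threshold M as preliminary proposals. The scores and M = 0.5 below are made-up values, not from the paper.

```python
# Illustrative sketch of the proposal-extraction step: threshold the 1-D
# action-score waveform and keep contiguous runs above M as preliminary
# behavior proposals. Scores and M = 0.5 are made-up values.

def extract_proposals(scores, m):
    """(start, end) index pairs of runs where score > m (end exclusive)."""
    proposals, start = [], None
    for i, s in enumerate(scores):
        if s > m and start is None:
            start = i
        elif s <= m and start is not None:
            proposals.append((start, i))
            start = None
    if start is not None:
        proposals.append((start, len(scores)))
    return proposals

if __name__ == "__main__":
    waveform = [0.1, 0.2, 0.7, 0.9, 0.6, 0.2, 0.1, 0.8, 0.8, 0.3]
    print(extract_proposals(waveform, 0.5))  # -> [(2, 5), (7, 9)]
```

Each returned segment would then be passed to the second-stage network for pruning and classification.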
This paper uses recall and precision to measure the performance of the model. Extensive experiments show that, on the dataset of this paper, the proposed model has obvious advantages compared with existing state-of-the-art behavior detection algorithms. When the Time Intersection over Union (TIoU) threshold is 0.5, the mean Average Precision (mAP) reaches 36.3% (against 21.5% for the baselines). In summary, this paper proposes a vehicle behavior detection model based on a multi-dimensional dynamic detection network, introducing spatial and temporal information to extract vehicle behaviors from long videos. Experiments show that the proposed algorithm is advanced and accurate in vehicle timing behavior detection. In the future, the focus will be on simultaneously detecting the timing behavior of multiple vehicles in complex traffic scenes (such as a busy street) while ensuring accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vehicle%20behavior%20detection" title="vehicle behavior detection">vehicle behavior detection</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory" title=" long short-term memory"> long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/112029/vehicle-timing-motion-detection-based-on-multi-dimensional-dynamic-detection-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112029.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">130</span> </span> </div>
</div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4746</span> Comparative Analysis of Edge Detection Techniques for Extracting Characters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rana%20Gill">Rana Gill</a>, <a href="https://publications.waset.org/abstracts/search?q=Chandandeep%20Kaur"> Chandandeep Kaur </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Segmentation of images can be implemented using different fundamental algorithms, such as edge detection (discontinuity-based segmentation), region growing (similarity-based segmentation), and iterative thresholding. A comprehensive literature review relevant to the study describes different techniques for vehicle number plate detection and edge detection techniques widely used on different types of images. This research work is based on edge detection techniques and calculates thresholds on the basis of five edge operators. The five operators used are Prewitt, Roberts, Sobel, LoG, and Canny. Segmentation of characters present in different types of images, such as vehicle number plates, house name plates, and characters on different sign boards, is selected as a case study in this work. The proposed methodology has seven stages. The proposed system has been implemented using MATLAB R2010a. All five operators have been compared on the basis of their performance. From the results, it is found that the Canny operator produces the best results among the operators used, and the performance of the edge operators in decreasing order is: Canny > LoG > Sobel > Prewitt > Roberts. 
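The kind of operator being compared can be illustrated with a minimal pure-Python sketch of the Sobel step: convolve 3x3 horizontal and vertical kernels over the image and take the gradient magnitude. The tiny synthetic step-edge image is an illustrative assumption; the paper's actual experiments were run in MATLAB over full images with all five operators.

```python
import math

# Minimal sketch of one compared operator (Sobel): convolve 3x3 kernels
# and take the gradient magnitude. The 4x4 step-edge image is illustrative.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D list of grey levels (borders left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

if __name__ == "__main__":
    # A vertical step edge: left half dark, right half bright.
    img = [[0, 0, 255, 255]] * 4
    mag = sobel_magnitude(img)
    print(mag[1])  # -> [0.0, 1020.0, 1020.0, 0.0]: strong response at the step
```

Thresholding this magnitude map is what yields the binary edge map from which characters are segmented.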
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=text" title=" text"> text</a>, <a href="https://publications.waset.org/abstracts/search?q=extracting%20characters" title=" extracting characters"> extracting characters</a> </p> <a href="https://publications.waset.org/abstracts/9054/comparative-analysis-of-edge-detection-techniques-for-extracting-characters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9054.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">426</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4745</span> A Convolutional Neural Network Based Vehicle Theft Detection, Location, and Reporting System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michael%20Moeti">Michael Moeti</a>, <a href="https://publications.waset.org/abstracts/search?q=Khuliso%20Sigama"> Khuliso Sigama</a>, <a href="https://publications.waset.org/abstracts/search?q=Thapelo%20Samuel%20Matlala"> Thapelo Samuel Matlala</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the principal challenges that the world is confronted with is insecurity. The crime rate is increasing rapidly, and protecting physical assets, especially in the motor vehicle industry, is becoming impossible through human effort alone. The need to develop technological solutions that detect and report theft without any human interference is inevitable. 
This is critical, especially for vehicle owners, to ensure theft detection and speedy identification for recovery efforts in cases where a vehicle is missing or attempted theft is taking place. The vehicle theft detection system uses a Convolutional Neural Network (CNN) to recognize the driver's face, captured using an installed mobile phone device. The location identification function uses a Global Positioning System (GPS) to determine the real-time location of the vehicle. Upon identification of the location, Global System for Mobile Communications (GSM) technology is used to report or notify the vehicle owner about the whereabouts of the vehicle. The mobile app was implemented in Python, which allows easy access to machine learning algorithms through its widely developed library ecosystem. The graphical user interface was developed in Java, which is better suited for mobile development. Google's online database (Firebase) was used as a means of storage for the application. The system integration test was performed using a simple percentage analysis. Sixty (60) vehicle owners participated in this study as a sample, and questionnaires were used in order to establish the acceptability of the system developed. The results indicate the efficiency of the proposed system; consequently, the paper proposes that the system can effectively monitor the vehicle at any given place, even if it is driven outside its normal jurisdiction. Moreover, the system can be used as a database to detect, locate, and report missing vehicles to different security agencies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=location%20identification" title=" location identification"> location identification</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS" title=" GPS"> GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=GSM" title=" GSM"> GSM</a> </p> <a href="https://publications.waset.org/abstracts/154066/a-convolutional-neural-network-based-vehicle-theft-detection-location-and-reporting-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/154066.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4744</span> Real-Time Multi-Vehicle Tracking Application at Intersections Based on Feature Selection in Combination with Color Attribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Zhang">Qiang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaojian%20Hu"> Xiaojian Hu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In multi-vehicle tracking based on feature selection in combination with color attribution, the tracking system efficiently tracks vehicles in a video with minimal error. The focus is on presenting a simple, fast, yet accurate and robust solution to problems such as the inaccurate and untimely responses of statistics-based adaptive traffic control systems in the intersection scenario. 
In this study, a real-time tracking system is proposed for multi-vehicle tracking in the intersection scene. Considering the complexity and application feasibility of the algorithm, in the object detection step, the detection results provided by virtual loops were post-processed and then used as the input to the tracker. For the tracker, lightweight methods were designed to extract and select features and incorporate them into the adaptive color tracking (ACT) framework, and suitable online feature selection algorithms were integrated into the mature ACT system with good compatibility. The proposed feature selection and multi-vehicle tracking methods are evaluated on the KITTI dataset and show efficient vehicle tracking performance compared to other state-of-the-art approaches in the same category. The system also performs excellently on the video sequences recorded at the intersection. Furthermore, the presented vehicle tracking system is suitable for surveillance applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=real-time" title="real-time">real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-vehicle%20tracking" title=" multi-vehicle tracking"> multi-vehicle tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20attribution" title=" color attribution"> color attribution</a> </p> <a href="https://publications.waset.org/abstracts/136438/real-time-multi-vehicle-tracking-application-at-intersections-based-on-feature-selection-in-combination-with-color-attribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4743</span> Iris Detection on RGB Image for Controlling Side Mirror</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Norzalina%20Othman">Norzalina Othman</a>, <a href="https://publications.waset.org/abstracts/search?q=Nurul%20Na%E2%80%99imy%20Wan"> Nurul Na’imy Wan</a>, <a href="https://publications.waset.org/abstracts/search?q=Azliza%20Mohd%20Rusli"> Azliza Mohd Rusli</a>, <a href="https://publications.waset.org/abstracts/search?q=Wan%20Noor%20Syahirah%20Meor%20Idris"> Wan Noor Syahirah Meor Idris</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Iris detection is a process where the position of the eyes is extracted from the face images. 
It is a method currently used in many applications, such as security and drowsiness detection. This paper proposes the use of eye detection to control the side mirrors of motor vehicles. The eye detection method aims to let the driver adjust the side mirrors automatically. The system determines the midpoint coordinate of the detected eyes on an RGB (color) image, and the y-coordinate is sent as an input signal to a controller that rotates the angle of the vehicle's side mirror. The eye position was cropped, and the midpoint coordinate was successfully detected from the circle of the iris using Viola-Jones detection and the circular Hough transform on the RGB image. The midpoint coordinates from the experiment were then tested with the controller to determine the angle of rotation of the side mirrors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris%20detection" title="iris detection">iris detection</a>, <a href="https://publications.waset.org/abstracts/search?q=midpoint%20coordinates" title=" midpoint coordinates"> midpoint coordinates</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20images" title=" RGB images"> RGB images</a>, <a href="https://publications.waset.org/abstracts/search?q=side%20mirror" title=" side mirror"> side mirror</a> </p> <a href="https://publications.waset.org/abstracts/8133/iris-detection-on-rgb-image-for-controlling-side-mirror" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8133.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">423</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4742</span> Design and Construction of Vehicle Tracking System with Global Positioning System/Global 
System for Mobile Communication Technology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bala%20Adamu%20Malami">Bala Adamu Malami</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The need for low-cost electronic vehicle/car security, designed in coordination with other security measures, is ever present in our society to reduce the risk of vehicle intrusion. Keeping this problem in mind, we designed an automatic GPS-based tracking system, integrated and fully customized to the vehicle, which detects the movement of the vehicle and also serves as a security system at a reasonable cost. Users can locate the vehicle's position via GPS by using the Google Maps application to show the vehicle's coordinates on a smartphone. The tracking system uses a Global System for Mobile Communication (GSM) modem for communication between the mobile station and the microcontroller to send and receive commands. The design can be further improved to capture the vehicle's movement range and alert the vehicle owner when the vehicle is out of range. 
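The suggested out-of-range alert can be sketched as a simple geofence check: compute the great-circle (haversine) distance between each GPS fix and a home point, and alert when it exceeds a permitted radius. The coordinates and the 5 km radius below are illustrative assumptions, not from the paper.

```python
import math

# Hedged sketch of an "out of range" geofence: haversine distance between
# a GPS fix and a home point, compared against a permitted radius.
# Coordinates and the 5 km radius are illustrative assumptions.

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def out_of_range(fix, home, radius_m=5_000):
    """True when the GPS fix is farther than radius_m from the home point."""
    return haversine_m(*fix, *home) > radius_m

if __name__ == "__main__":
    home = (13.0059, 5.2476)      # e.g. a parking location
    nearby = (13.0100, 5.2500)    # a few hundred metres away
    far = (13.5000, 5.2476)       # tens of kilometres away
    print(out_of_range(nearby, home), out_of_range(far, home))  # -> False True
```

On the microcontroller, a `True` result would trigger the GSM notification to the owner.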
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electronic" title="electronic">electronic</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS" title=" GPS"> GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=GSM%20modem" title=" GSM modem"> GSM modem</a>, <a href="https://publications.waset.org/abstracts/search?q=communication" title=" communication"> communication</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle" title=" vehicle"> vehicle</a> </p> <a href="https://publications.waset.org/abstracts/159657/design-and-construction-of-vehicle-tracking-system-with-global-positioning-systemglobal-system-for-mobile-communication-technology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159657.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4741</span> Traffic Density Measurement by Automatic Detection of the Vehicles Using Gradient Vectors from Aerial Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saman%20Ghaffarian">Saman Ghaffarian</a>, <a href="https://publications.waset.org/abstracts/search?q=Ilgin%20G%C3%B6ka%C5%9Far"> Ilgin Gökaşar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a new automatic vehicle detection method for very high resolution aerial images to measure traffic density. The proposed method starts by extracting road regions from the image using road vector data. Then, the road image is divided into equal sections considering the resolution of the images. 
Gradient vectors of the road image are computed from the edge map of the corresponding image. Gradient vectors on each boundary of the sections are divided into groups where the vectors significantly change their directions. Finally, the number of vehicles in each section is determined by calculating the standard deviation of the gradient vectors in each group and accepting a group as a vehicle when its standard deviation is above a predefined threshold value. The proposed method was tested on four very high resolution aerial images acquired from Istanbul, Turkey, which depict roads and vehicles with diverse characteristics. The results show the reliability of the proposed method in detecting vehicles, producing an 86% overall F1 score. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aerial%20images" title="aerial images">aerial images</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20transportation%20systems" title=" intelligent transportation systems"> intelligent transportation systems</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20density%20measurement" title=" traffic density measurement"> traffic density measurement</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20detection" title=" vehicle detection"> vehicle detection</a> </p> <a href="https://publications.waset.org/abstracts/32312/traffic-density-measurement-by-automatic-detection-of-the-vehicles-using-gradient-vectors-from-aerial-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32312.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">379</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4740</span> Design 
and Development of an Autonomous Underwater Vehicle for Irrigation Canal Monitoring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mamoon%20Masud">Mamoon Masud</a>, <a href="https://publications.waset.org/abstracts/search?q=Suleman%20Mazhar"> Suleman Mazhar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Indus river basin&#8217;s irrigation system in Pakistan is extremely complex, spanning over 50,000 km. Its maintenance and monitoring demand enormous resources. This paper describes the development of a streamlined and low-cost autonomous underwater vehicle (AUV) for the monitoring of irrigation canals, including water quality monitoring and water theft detection. The vehicle is a hovering-type AUV, designed mainly for monitoring irrigation canals, with a fully documented design and open source code. It has a length of 17 inches and a radius of 3.5 inches, with a depth rating of 5 m. Multiple sensors are present onboard the AUV for monitoring water quality parameters, including pH, turbidity, total dissolved solids (TDS), and dissolved oxygen. A 9-DOF Inertial Measurement Unit (IMU), the GY-85, is used, which incorporates an accelerometer (ADXL345), a gyroscope (ITG-3200), and a magnetometer (HMC5883L). The readings from these sensors are fused using the direction cosine matrix (DCM) algorithm, providing the AUV with the heading angle, while a pressure sensor gives the depth of the AUV. Two sonar-based range sensors are used for obstacle detection, enabling the vehicle to align itself with the edges of the irrigation canal. Four thrusters control the vehicle&#8217;s surge, heading, and heave, providing 3 DOF. The thrusters are controlled using a proportional-integral-derivative (PID) feedback control system, with heading angle and depth as the controller&#8217;s inputs and thruster motor speed as the output. 
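The control loop described above can be sketched with a generic discrete PID controller driving a toy first-order plant: the error between the depth setpoint and the measured depth determines thruster output. The gains and the plant response below are illustrative assumptions, not the authors' tuning.

```python
# Generic discrete PID sketch of the depth loop: the error between the
# depth setpoint and the measurement drives thruster output. Gains and the
# toy first-order plant are illustrative, not the authors' tuning.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(steps=1000, dt=0.1, target_depth=2.0):
    """Drive a crude plant (depth rate proportional to thrust) to the setpoint."""
    pid = PID(kp=1.2, ki=0.1, kd=0.05, dt=dt)
    depth = 0.0
    for _ in range(steps):
        thrust = pid.update(target_depth, depth)
        depth += 0.1 * thrust * dt  # plant: 0.1 m/s of depth rate per unit thrust
    return depth

if __name__ == "__main__":
    print(round(simulate(), 2))  # settles near the 2.0 m setpoint
```

The real vehicle runs one such loop for depth (pressure sensor) and one for heading (fused IMU angle), mixing the outputs across the four thrusters.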
A flow sensor has been incorporated to monitor the canal water level and detect water-theft events in the irrigation system. In addition to water theft detection, the vehicle also provides information on water quality, making it possible to identify the source(s) of water contamination. Detection of such events can provide useful policy inputs for improving irrigation efficiency and reducing water contamination. The AUV, being low-cost, small, and suitable for autonomous maneuvering and for water level and quality monitoring in irrigation canals, can be used for irrigation network monitoring at a large scale. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=the%20autonomous%20underwater%20vehicle" title="the autonomous underwater vehicle">the autonomous underwater vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=irrigation%20canal%20monitoring" title=" irrigation canal monitoring"> irrigation canal monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=water%20quality%20monitoring" title=" water quality monitoring"> water quality monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=underwater%20line%20tracking" title=" underwater line tracking"> underwater line tracking</a> </p> <a href="https://publications.waset.org/abstracts/96861/design-and-development-of-an-autonomous-underwater-vehicle-for-irrigation-canal-monitoring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96861.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4739</span> Pyramidal Lucas-Kanade Optical Flow Based Moving Object Detection in Dynamic Scenes</h5> 
<div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyojin%20Lim">Hyojin Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Cuong%20Nguyen%20Khac"> Cuong Nguyen Khac</a>, <a href="https://publications.waset.org/abstracts/search?q=Yeongyu%20Choi"> Yeongyu Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Youl%20Jung"> Ho-Youl Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a simple moving object detection method based on motion vectors obtained from pyramidal Lucas-Kanade optical flow. The proposed method detects moving objects such as pedestrians, other vehicles, and obstacles in front of the host vehicle, and it can provide a warning to the driver. Motion vectors are obtained using pyramidal Lucas-Kanade optical flow, and outliers are eliminated by comparing the amplitude of each vector with a pre-defined threshold value. The background model is obtained by calculating the mean and the variance of the amplitudes of recent motion vectors in a rectangular local region called the cell. The model is applied as the reference to classify motion vectors of moving objects and those of the background. Motion vectors are clustered into rectangular regions using the unsupervised K-means clustering algorithm. A labeling method is applied to merge groups that are close to each other, based on the distance between the center points of the rectangles. Through simulations of four kinds of scenarios, such as a motorbike, a vehicle, and pedestrians approaching the host vehicle, we show that the proposed method is simple but efficient for moving object detection in parking lots. 
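The cell-based background model can be illustrated with a minimal sketch: the mean and standard deviation of recent motion-vector amplitudes in a cell serve as the reference, and a new vector deviating strongly from that reference is labeled as belonging to a moving object. The amplitudes and the three-sigma rule below are illustrative assumptions, not values from the paper.

```python
import statistics

# Simplified cell-based background model: the mean and spread of recent
# motion-vector amplitudes in a cell form the reference; new vectors that
# deviate strongly are labeled as moving-object vectors. Amplitudes and
# the 3-sigma rule are illustrative assumptions.

def classify_vectors(history, new_amplitudes, k=3.0):
    """Label each amplitude 'object' if it deviates more than k standard
    deviations from the cell's historical mean, else 'background'."""
    mean = statistics.fmean(history)
    std = statistics.pstdev(history) or 1e-6  # guard against zero spread
    return ["object" if abs(a - mean) > k * std else "background"
            for a in new_amplitudes]

if __name__ == "__main__":
    cell_history = [0.5, 0.6, 0.4, 0.5, 0.55, 0.45]  # ego-motion background
    incoming = [0.5, 4.2]                            # 4.2: likely a moving object
    print(classify_vectors(cell_history, incoming))  # -> ['background', 'object']
```

Vectors labeled `object` would then be clustered with K-means into the rectangular regions described above.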
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=moving%20object%20detection" title="moving object detection">moving object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20scene" title=" dynamic scene"> dynamic scene</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow" title=" optical flow"> optical flow</a>, <a href="https://publications.waset.org/abstracts/search?q=pyramidal%20optical%20flow" title=" pyramidal optical flow"> pyramidal optical flow</a> </p> <a href="https://publications.waset.org/abstracts/50958/pyramidal-lucas-kanade-optical-flow-based-moving-object-detection-in-dynamic-scenes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50958.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">349</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4738</span> A Vehicle Detection and Speed Measurement Algorithm Based on Magnetic Sensors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Panagiotis%20Gkekas">Panagiotis Gkekas</a>, <a href="https://publications.waset.org/abstracts/search?q=Christos%20Sougles"> Christos Sougles</a>, <a href="https://publications.waset.org/abstracts/search?q=Dionysios%20Kehagias"> Dionysios Kehagias</a>, <a href="https://publications.waset.org/abstracts/search?q=Dimitrios%20Tzovaras"> Dimitrios Tzovaras</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cooperative intelligent transport systems (C-ITS) can greatly improve safety and efficiency in road transport by enabling communication, not only between vehicles themselves but also between vehicles and 
infrastructure. For that reason, traffic surveillance systems on the road are of great importance. This paper focuses on the development of an on-road unit comprising several magnetic sensors for real-time vehicle detection, movement-direction determination, and speed measurement. Magnetic sensors detect and measure changes in the earth&rsquo;s magnetic field. Vehicles are composed of many parts with ferromagnetic properties. Depending on the sensors&rsquo; sensitivity, the changes in the earth&rsquo;s magnetic field caused by passing vehicles can be detected and analyzed to extract information about the moving vehicles. In this paper, we present a prototype algorithm for real-time, high-accuracy vehicle detection and speed measurement, which can be implemented as a portable, low-cost solution that is non-invasive to existing infrastructure and has the potential to replace existing high-cost implementations. The paper describes the algorithm and presents results from its preliminary laboratory testing in a near-real-world environment. Acknowledgments: Work presented in this paper was co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation (call RESEARCH–CREATE–INNOVATE) under contract no. Τ1EDK-03081 (project ODOS2020). 
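The basic speed-measurement idea behind such a unit — time how long a magnetic disturbance takes to travel between two sensors a known distance apart — can be sketched as below. This is a hedged illustration only: the function names, the simple threshold-crossing detector, and the sensor spacing are assumptions, and the paper's actual algorithm is more sophisticated.

```python
def detection_time(samples, t0, dt, threshold):
    """Time of the first sample whose absolute deviation from the
    quiescent field exceeds the threshold, or None if no vehicle."""
    for i, s in enumerate(samples):
        if abs(s) > threshold:
            return t0 + i * dt
    return None

def vehicle_speed(sig_a, sig_b, spacing_m, dt, threshold):
    """Speed (m/s) from the arrival-time difference of the disturbance
    at two sensors spaced spacing_m metres apart along the lane."""
    ta = detection_time(sig_a, 0.0, dt, threshold)
    tb = detection_time(sig_b, 0.0, dt, threshold)
    if ta is None or tb is None or tb == ta:
        return None  # no detection, or sampling too coarse to resolve
    return spacing_m / (tb - ta)
```

For example, with sensors 1 m apart sampled every 10 ms, a disturbance arriving 10 samples later at the second sensor corresponds to 10 m/s.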
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=magnetic%20sensors" title="magnetic sensors">magnetic sensors</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20detection" title=" vehicle detection"> vehicle detection</a>, <a href="https://publications.waset.org/abstracts/search?q=speed%20measurement" title=" speed measurement"> speed measurement</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20surveillance%20system" title=" traffic surveillance system"> traffic surveillance system</a> </p> <a href="https://publications.waset.org/abstracts/151555/a-vehicle-detection-and-speed-measurement-algorithm-based-on-magnetic-sensors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151555.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">122</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4737</span> Vehicle Speed Estimation Using Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prodipta%20Bhowmik">Prodipta Bhowmik</a>, <a href="https://publications.waset.org/abstracts/search?q=Poulami%20Saha"> Poulami Saha</a>, <a href="https://publications.waset.org/abstracts/search?q=Preety%20Mehra"> Preety Mehra</a>, <a href="https://publications.waset.org/abstracts/search?q=Yogesh%20Soni"> Yogesh Soni</a>, <a href="https://publications.waset.org/abstracts/search?q=Triloki%20Nath%20Jha"> Triloki Nath Jha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In India, the smart city concept is growing day by day. 
For smart city development, therefore, a better traffic management and monitoring system is an essential requirement. Road accidents are increasing as more vehicles take to the road, and reckless driving is responsible for a large share of them. An efficient traffic management system is therefore required on all kinds of roads to control traffic speed. The speed limit varies from road to road. Radar systems have been used previously, but their high cost and limited precision have kept them from becoming favored in traffic management. Traffic management systems face different problems every day, and how to solve them has become an active research topic. This paper proposes a computer vision and machine learning-based automated system for multiple vehicle detection, tracking, and speed estimation using image processing. Detecting vehicles and estimating their speeds from real-time video is a difficult task. The objective of this paper is to detect vehicles and estimate their speeds as accurately as possible. To this end, a real-time video is first captured, frames are extracted from the video, vehicles are detected in those frames, tracking of the vehicles then begins, and finally the speed of each moving vehicle is estimated. The goal of this method is to develop a cost-friendly system that can detect multiple types of vehicles at the same time. 
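The final step of the pipeline above — turning a tracked centroid's displacement across frames into a speed — reduces to a pixel-to-metric conversion. The sketch below is a minimal illustration under assumed parameters (the meters-per-pixel scale, frame rate, and function name are not from the paper, and a real system must calibrate the scale per camera view):

```python
def estimate_speed_kmh(c1, c2, fps, meters_per_pixel, frames_between=1):
    """Speed in km/h from a tracked centroid moving from c1 to c2
    (pixel coordinates) over frames_between frames of fps video."""
    dx = c2[0] - c1[0]
    dy = c2[1] - c1[1]
    pixels = (dx * dx + dy * dy) ** 0.5   # displacement in pixels
    meters = pixels * meters_per_pixel    # displacement in meters
    seconds = frames_between / fps        # elapsed time
    return meters / seconds * 3.6         # m/s -> km/h
```

For instance, a centroid moving 50 px over 5 frames of 25 fps video, at an assumed scale of 0.05 m/px, works out to 45 km/h.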
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title="OpenCV">OpenCV</a>, <a href="https://publications.waset.org/abstracts/search?q=Haar%20Cascade%20classifier" title=" Haar Cascade classifier"> Haar Cascade classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=DLIB" title=" DLIB"> DLIB</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV3" title=" YOLOV3"> YOLOV3</a>, <a href="https://publications.waset.org/abstracts/search?q=centroid%20tracker" title=" centroid tracker"> centroid tracker</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20detection" title=" vehicle detection"> vehicle detection</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20tracking" title=" vehicle tracking"> vehicle tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20speed%20estimation" title=" vehicle speed estimation"> vehicle speed estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/153549/vehicle-speed-estimation-using-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153549.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">84</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4736</span> Trajectory Planning Algorithms for Autonomous Agricultural Vehicles</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Caner%20Koc">Caner Koc</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Dilara%20Gerdan%20Koc"> Dilara Gerdan Koc</a>, <a href="https://publications.waset.org/abstracts/search?q=Mustafa%20Vatandas"> Mustafa Vatandas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The fundamental components of autonomous agricultural robot design are a working understanding of coordinates, correct construction of the desired route, and sensing of environmental elements. Agricultural robots employ a variety of sensors, hardware, and software to realize these capabilities. These enable the fully automated driving system of an autonomous vehicle to respond to changing environmental conditions as a human-driven vehicle would. To calculate the vehicle's motion trajectory from sensor data, this automation system typically consists of a sophisticated software architecture based on object detection and driving decisions. In this study, the software architecture of an autonomous agricultural vehicle is examined and compared against trajectory planning techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=agriculture%205.0" title="agriculture 5.0">agriculture 5.0</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20intelligence" title=" computational intelligence"> computational intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20planning" title=" motion planning"> motion planning</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory%20planning" title=" trajectory planning"> trajectory planning</a> </p> <a href="https://publications.waset.org/abstracts/165714/trajectory-planning-algorithms-for-autonomous-agricultural-vehicles" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165714.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">78</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4735</span> Electrification Strategy of Hybrid Electric Vehicle as a Solution to Decrease CO2 Emission in Cities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Mourad">M. Mourad</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Mahmoud"> K. Mahmoud</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hybrid vehicles have recently attracted major attention as an alternative vehicle technology. This type of vehicle contributes greatly to reducing pollution. Therefore, this work studies the influence of the electrification phase of a hybrid electric vehicle on its emissions under different road conditions. 
To accomplish this investigation, a simulation model was used to evaluate the external characteristics of the hybrid electric vehicle under varying road-resistance conditions. This paper thus reports a methodology for decreasing vehicle emissions, especially greenhouse gas emissions, inside cities. The results show the effect of electrification on the vehicle's performance characteristics: CO<sub>2</sub> emissions decrease by up to 50.6% on an urban driving cycle when the electrification strategy is applied to the hybrid electric vehicle. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electrification%20strategy" title="electrification strategy">electrification strategy</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20electric%20vehicle" title=" hybrid electric vehicle"> hybrid electric vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=driving%20cycle" title=" driving cycle"> driving cycle</a>, <a href="https://publications.waset.org/abstracts/search?q=CO2%20emission" title=" CO2 emission"> CO2 emission</a> </p> <a href="https://publications.waset.org/abstracts/50278/electrification-strategy-of-hybrid-electric-vehicle-as-a-solution-to-decrease-co2-emission-in-cities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50278.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">442</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4734</span> Development of Real Time System for Human Detection and Localization from Unmanned Aerial Vehicle Using Optical and Thermal Sensor and Visualization on Geographic Information Systems Platform</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nemi%20Bhattarai">Nemi Bhattarai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, there has been a rapid increase in the use of Unmanned Aerial Vehicles (UAVs) in search and rescue (SAR) operations, disaster management, and many other areas where information about the location of human beings is important. This research focuses primarily on the use of optical and thermal cameras on a UAV platform for real-time detection and localization of human beings and their visualization on GIS. This research will be beneficial in disaster management for searching for lost humans in wilderness or difficult terrain, detecting abnormal human behavior in border or high-security areas, studying the distribution of people at night, estimating crowd density, managing the flow of people during evacuations, planning provisions for areas of high human density, and much more. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=UAV" title="UAV">UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20detection" title=" human detection"> human detection</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time" title=" real-time"> real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=haar-like" title=" haar-like"> haar-like</a>, <a href="https://publications.waset.org/abstracts/search?q=GIS" title=" GIS"> GIS</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20sensor" title=" thermal sensor "> thermal sensor </a> </p> <a 
href="https://publications.waset.org/abstracts/81472/development-of-real-time-system-for-human-detection-and-localization-from-unmanned-aerial-vehicle-using-optical-and-thermal-sensor-and-visualization-on-geographic-information-systems-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81472.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">465</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4733</span> Traffic Analysis and Prediction Using Closed-Circuit Television Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aragorn%20Joaquin%20Pineda%20Dela%20Cruz">Aragorn Joaquin Pineda Dela Cruz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Road traffic congestion in Hong Kong is continually worsening. The largest contributing factor is the growth of the vehicle fleet, which raises competition for the use of road space. This study proposes a project that processes closed-circuit television images and videos to provide real-time traffic detection and prediction capabilities. Specifically, a deep-learning model applies computer vision techniques for video- and image-based vehicle counting, and a separate model then detects and predicts traffic congestion levels from that data. State-of-the-art object detection models such as You Only Look Once and Faster Region-based Convolutional Neural Networks are tested and compared on closed-circuit television data from various major roads in Hong Kong. 
The resulting counts are then used to train long short-term memory networks that predict traffic conditions in the near future, in an effort to provide more precise and quicker overviews of current and future traffic conditions than existing solutions such as navigation apps. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intelligent%20transportation%20system" title="intelligent transportation system">intelligent transportation system</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20detection" title=" vehicle detection"> vehicle detection</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20analysis" title=" traffic analysis"> traffic analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20prediction" title=" traffic prediction"> traffic prediction</a> </p> <a href="https://publications.waset.org/abstracts/158196/traffic-analysis-and-prediction-using-closed-circuit-television-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158196.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4732</span> Challenges in Video Based Object Detection in Maritime Scenario Using Computer Vision</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dilip%20K.%20Prasad">Dilip K. Prasad</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Krishna%20Prasath"> C. Krishna Prasath</a>, <a href="https://publications.waset.org/abstracts/search?q=Deepu%20Rajan"> Deepu Rajan</a>, <a href="https://publications.waset.org/abstracts/search?q=Lily%20Rachmawati"> Lily Rachmawati</a>, <a href="https://publications.waset.org/abstracts/search?q=Eshan%20Rajabally"> Eshan Rajabally</a>, <a href="https://publications.waset.org/abstracts/search?q=Chai%20Quek"> Chai Quek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper discusses the technical challenges in maritime image processing and machine vision for video streams generated by cameras. Even well-documented problems such as horizon detection and registration of frames in a video are very challenging in maritime scenarios, and the more advanced problems of background subtraction and object detection in video streams are harder still. The dynamic nature of the background, the unavailability of static cues, the presence of small objects against distant backgrounds, and illumination effects all contribute to the challenges discussed here. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomous%20maritime%20vehicle" title="autonomous maritime vehicle">autonomous maritime vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=situation%20awareness" title=" situation awareness"> situation awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a> </p> <a href="https://publications.waset.org/abstracts/54887/challenges-in-video-based-object-detection-in-maritime-scenario-using-computer-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54887.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">458</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4731</span> Unauthorized License Verifier and Secure Access to Vehicle </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=G.%20Prakash">G. Prakash</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20Mohamed%20Aasiq"> L. Mohamed Aasiq</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Dhivya"> N. Dhivya</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Jothi%20Mani"> M. Jothi Mani</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Mounika"> R. Mounika</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Gomathi"> B. 
Gomathi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In our day-to-day life, many people meet with accidents for various reasons, such as overspeeding, vehicle overloading, and violation of traffic rules. The driving license system is difficult for the government to monitor. To prevent unlicensed drivers, who cause most of these accidents, from driving, a new system is proposed. The proposed system consists of a smart card capable of storing the license details of a particular person. Vehicles such as cars and bikes should have a card reader capable of reading the license. A person who wishes to drive the vehicle should insert the card (license) into the vehicle and then enter a password on the keypad. If the license data stored on the card matches the database of all license holders held in the microcontroller, the driver can proceed to ignition after the automated opening of the fuel tank valve; otherwise, the user is prevented from using the vehicle. Moreover, an overload detector in the proposed system checks the load and prompts the user to correct any overload before driving. This increases the security of vehicles and also ensures safe driving by preventing accidents. 
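The gating logic this abstract describes — license match, password check, then overload check before ignition is allowed — might be sketched as below. All names, the dictionary-based "database", and the load limit are illustrative assumptions; the paper's system uses a smart card, keypad, and microcontroller rather than this simplification.

```python
def authorize_ignition(card_license, password_entered, licenses_db,
                       load_kg, max_load_kg):
    """Return True only if the license is known, the password matches,
    and the vehicle is not overloaded (illustrative logic only)."""
    record = licenses_db.get(card_license)
    if record is None or record["password"] != password_entered:
        return False   # unlicensed driver or wrong password: block vehicle
    if load_kg > max_load_kg:
        return False   # overload detected: user must reduce load first
    return True        # open fuel tank valve, allow ignition
```

In this sketch the microcontroller's stored license data is modeled as a dictionary keyed by license number; ignition proceeds only when all three checks pass.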
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=license" title="license">license</a>, <a href="https://publications.waset.org/abstracts/search?q=verifier" title=" verifier"> verifier</a>, <a href="https://publications.waset.org/abstracts/search?q=EEPROM" title=" EEPROM"> EEPROM</a>, <a href="https://publications.waset.org/abstracts/search?q=secure" title=" secure"> secure</a>, <a href="https://publications.waset.org/abstracts/search?q=overload%20detection" title=" overload detection"> overload detection</a> </p> <a href="https://publications.waset.org/abstracts/3963/unauthorized-license-verifier-and-secure-access-to-vehicle" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">242</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4730</span> Self-Directed-Car on GT Road: Grand Trunk Road</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rameez%20Ahmad">Rameez Ahmad</a>, <a href="https://publications.waset.org/abstracts/search?q=Aqib%20Mehmood"> Aqib Mehmood</a>, <a href="https://publications.waset.org/abstracts/search?q=Imran%20Khan"> Imran Khan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A self-directed car (SDC) can drive itself from one point to another without support from a driver. Many believe that self-directed cars have the potential to transform the transportation industry while essentially eliminating accidents and cleaning up the environment. 
This study examines the effects that SDC (also called self-driving, driverless, or robotic) vehicles are likely to have on travel demand and ride schemes. It considers a vision-based hardware and software architecture and its cost benefits, applying GOLD (Generic Obstacle and Lane Detection) together with a knowledge-based system to predict the vehicles' potential, taking into account shape, color, and balance, in a structured environment with colored lane patterns and lane-position constraints. The study explores the problematic consequences of SDCs on the GT (Grand Trunk) road and how to make the car more effective. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SDC" title="SDC">SDC</a>, <a href="https://publications.waset.org/abstracts/search?q=gold" title=" gold"> gold</a>, <a href="https://publications.waset.org/abstracts/search?q=GT" title=" GT"> GT</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge-based%20system" title=" knowledge-based system"> knowledge-based system</a> </p> <a href="https://publications.waset.org/abstracts/30033/self-directed-car-on-gt-road-grand-trunk-road" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30033.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">370</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4729</span> Internet of Things-Based Electric Vehicle Charging Notification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nagarjuna%20Pitty">Nagarjuna Pitty</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The invention “Advanced Method and Process Quick 
Electric Vehicle Charging” concerns electric vehicles (EVs), which are quickly becoming the heralds of vehicle innovation. This study addresses how charging-process communication is performed between the EV and the Electric Vehicle Supply Equipment (EVSE). The energy utilization of gas-powered motors is higher than that of electric engines. The invention relates to an advanced method and process for quick electric vehicle charging. In this research paper, approaches to electric vehicle charging are reviewed, and the charging phases are described comprehensively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electric" title="electric">electric</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle" title=" vehicle"> vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=charging" title=" charging"> charging</a>, <a href="https://publications.waset.org/abstracts/search?q=notification" title=" notification"> notification</a>, <a href="https://publications.waset.org/abstracts/search?q=IoT" title=" IoT"> IoT</a>, <a href="https://publications.waset.org/abstracts/search?q=supply" title=" supply"> supply</a>, <a href="https://publications.waset.org/abstracts/search?q=equipment" title=" equipment"> equipment</a> </p> <a href="https://publications.waset.org/abstracts/166037/internet-of-things-based-electric-vehicle-charging-notification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166037.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">71</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a 
class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=158">158</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=159">159</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20detection&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About 
Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>