Search results for: video surveillance
class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="video surveillance"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1376</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: video surveillance</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1376</span> Detection and Tracking for the Protection of the Elderly and Socially Vulnerable People in the Video Surveillance System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mobarok%20Hossain%20Bhuyain">Mobarok Hossain Bhuyain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video surveillance processing has attracted various security fields transforming it into one of the leading research fields. Today's demand for detection and tracking of human mobility for security is very useful for human security, such as in crowded areas. Accordingly, video surveillance technology has seen a rapid advancement in recent years, with algorithms analyzing the behavior of people under surveillance automatically. The main motivation of this research focuses on the detection and tracking of the elderly and socially vulnerable people in crowded areas. Degenerate people are a major health concern, especially for elderly people and socially vulnerable people. One major disadvantage of video surveillance is the need for continuous monitoring, especially in crowded areas. To assist the security monitoring live surveillance video, image processing, and artificial intelligence methods can be used to automatically send warning signals to the monitoring officers about elderly people and socially vulnerable people. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20detection" title="human detection">human detection</a>, <a href="https://publications.waset.org/abstracts/search?q=target%20tracking" title=" target tracking"> target tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a> </p> <a href="https://publications.waset.org/abstracts/131472/detection-and-tracking-for-the-protection-of-the-elderly-and-socially-vulnerable-people-in-the-video-surveillance-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/131472.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1375</span> H.264 Video Privacy Protection Method Using Regions of Interest Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Taekyun%20Doo">Taekyun Doo</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheongmin%20Ji"> Cheongmin Ji</a>, <a href="https://publications.waset.org/abstracts/search?q=Manpyo%20Hong"> Manpyo Hong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Like a closed-circuit television (CCTV), video surveillance system is widely placed for gathering video from unspecified people to prevent crime, surveillance, or many other purposes. However, abuse of CCTV brings about concerns of personal privacy invasions. In this paper, we propose an encryption method to protect personal privacy system in H.264 compressed video bitstream with encrypting only regions of interest (ROI). There is no need to change the existing video surveillance system. In addition, encrypting ROI in compressed video bitstream is a challenging work due to spatial and temporal drift errors. For this reason, we propose a novel drift mitigation method when ROI is encrypted. The proposed method was implemented by using JM reference software based on the H.264 compressed videos, and experimental results show the verification of our proposed methods and its effectiveness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.264%2FAVC" title="H.264/AVC">H.264/AVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20encryption" title=" video encryption"> video encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=privacy%20protection" title=" privacy protection"> privacy protection</a>, <a href="https://publications.waset.org/abstracts/search?q=post%20compression" title=" post compression"> post compression</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20of%20interest" title=" region of interest"> region of interest</a> </p> <a href="https://publications.waset.org/abstracts/57651/h264-video-privacy-protection-method-using-regions-of-interest-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57651.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">340</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1374</span> Video Foreground Detection Based on Adaptive Mixture Gaussian Model for Video Surveillance Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Alavianmehr">M. A. Alavianmehr</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Tashk"> A. Tashk</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Sodagaran"> A. Sodagaran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Modeling background and moving objects are significant techniques for video surveillance and other video processing applications. This paper presents a foreground detection algorithm that is robust against illumination changes and noise based on adaptive mixture Gaussian model (GMM), and provides a novel and practical choice for intelligent video surveillance systems using static cameras. In the previous methods, the image of still objects (background image) is not significant. On the contrary, this method is based on forming a meticulous background image and exploiting it for separating moving objects from their background. The background image is specified either manually, by taking an image without vehicles, or is detected in real-time by forming a mathematical or exponential average of successive images. The proposed scheme can offer low image degradation. The simulation results demonstrate high degree of performance for the proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20models" title=" background models"> background models</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a>, <a href="https://publications.waset.org/abstracts/search?q=foreground%20detection" title=" foreground detection"> foreground detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a> </p> <a href="https://publications.waset.org/abstracts/16364/video-foreground-detection-based-on-adaptive-mixture-gaussian-model-for-video-surveillance-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16364.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">516</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1373</span> H.263 Based Video Transceiver for Wireless Camera System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Won-Ho%20Kim">Won-Ho Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a design of H.263 based wireless video transceiver is presented for wireless camera system. It uses standard WIFI transceiver and the covering area is up to 100m. Furthermore the standard H.263 video encoding technique is used for video compression since wireless video transmitter is unable to transmit high capacity raw data in real time and the implemented system is capable of streaming at speed of less than 1Mbps using NTSC 720x480 video. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wireless%20video%20transceiver" title="wireless video transceiver">wireless video transceiver</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance%20camera" title=" video surveillance camera"> video surveillance camera</a>, <a href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing" title=" H.263 video encoding digital signal processing"> H.263 video encoding digital signal processing</a> </p> <a href="https://publications.waset.org/abstracts/12951/h263-based-video-transceiver-for-wireless-camera-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12951.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">364</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1372</span> Surveillance Video Summarization Based on Histogram Differencing and Sum Conditional Variance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nada%20Jasim%20Habeeb">Nada Jasim Habeeb</a>, <a href="https://publications.waset.org/abstracts/search?q=Rana%20Saad%20Mohammed"> Rana Saad Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Muntaha%20Khudair%20Abbass"> Muntaha Khudair Abbass </a> </p> <p class="card-text"><strong>Abstract:</strong></p> For more efficient and fast video summarization, this paper presents a surveillance video summarization method. The presented method works to improve video summarization technique. This method depends on temporal differencing to extract most important data from large video stream. This method uses histogram differencing and Sum Conditional Variance which is robust against to illumination variations in order to extract motion objects. The experimental results showed that the presented method gives better output compared with temporal differencing based summarization techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=temporal%20differencing" title="temporal differencing">temporal differencing</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title=" video summarization"> video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram%20differencing" title=" histogram differencing"> histogram differencing</a>, <a href="https://publications.waset.org/abstracts/search?q=sum%20conditional%20variance" title=" sum conditional variance"> sum conditional variance</a> </p> <a href="https://publications.waset.org/abstracts/54404/surveillance-video-summarization-based-on-histogram-differencing-and-sum-conditional-variance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54404.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">349</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1371</span> Remote Video Supervision via DVB-H Channels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hanen%20Ghabi">Hanen Ghabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Youssef%20Oudhini"> Youssef Oudhini</a>, <a href="https://publications.waset.org/abstracts/search?q=Hassen%20Mnif"> Hassen Mnif</a> </p> <p class="card-text"><strong>Abstract:</strong></p> By reference to recent publications dealing with the same problem, and as a follow-up to this research work already published, we propose in this article a new original idea of tele supervision exploiting the opportunities offered by the DVB-H system. The objective is to exploit the RF channels of the DVB-H network in order to insert digital remote monitoring images dedicated to a remote solar power plant. Indeed, the DVB-H (Digital Video Broadcast-Handheld) broadcasting system was designed and deployed for digital broadcasting on the same platform as the parent system, DVB-T. We claim to be able to exploit this approach in order to satisfy the operator of remote photovoltaic sites (and others) in order to remotely control the components of isolated installations by means of video surveillance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title="video surveillance">video surveillance</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20video%20broadcast-handheld" title=" digital video broadcast-handheld"> digital video broadcast-handheld</a>, <a href="https://publications.waset.org/abstracts/search?q=photovoltaic%20sites" title=" photovoltaic sites"> photovoltaic sites</a>, <a href="https://publications.waset.org/abstracts/search?q=AVC" title=" AVC"> AVC</a> </p> <a href="https://publications.waset.org/abstracts/147516/remote-video-supervision-via-dvb-h-channels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1370</span> Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuan-Hsiang%20Chang">Yuan-Hsiang Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Pin-Chi%20Lin"> Pin-Chi Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Li-Der%20Jeng"> Li-Der Jeng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Advance in techniques of image and video processing has enabled the development of intelligent video surveillance systems. This study was aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. The adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during the interaction, the Kalman filter was used to retain a complete trajectory for each human object. Finally, the motion trajectory analysis was developed to distinguish between the interaction and non-interaction events based on derivatives of trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system could achieve the classification accuracy of 80% in interaction events and 95% in non-interaction events, respectively. In summary, we have explored the idea to investigate a system for the automatic classification of events for interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated in an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatch, fighting, etc.). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion%20detection" title="motion detection">motion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20tracking" title=" motion tracking"> motion tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory%20analysis" title=" trajectory analysis"> trajectory analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/13650/automatic-motion-trajectory-analysis-for-dual-human-interaction-using-video-sequences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">548</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1369</span> A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mulugeta%20K.%20Tefera">Mulugeta K. Tefera</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaolong%20Yang"> Xiaolong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20Liu"> Jian Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection and tracking of moving objects are very important in many application contexts such as detection and recognition of people, visual surveillance and automatic generation of video effect and so on. However, the task of detecting a real shape of an object in motion becomes tricky due to various challenges like dynamic scene changes, presence of shadow, and illumination variations due to light switch. For such systems, once the moving object is detected, tracking is also a crucial step for those applications that used in military defense, video surveillance, human computer interaction, and medical diagnostics as well as in commercial fields such as video games. In this paper, an object presents in dynamic background is detected using adaptive mixture of Gaussian based analysis of the video sequences. Then the detected moving object is tracked using the region based moving object tracking and inter-frame differential mechanisms to address the partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target by enhancing and post processing morphological operations. Secondly, the extracted object uses region based moving object tracking and inter-frame difference to improve the tracking speed of real-time moving objects in different video frames. Finally, the plotting method was applied to detect the moving objects effectively and describes the object’s motion being tracked. The experiment has been performed on image sequences acquired both indoor and outdoor environments and one stationary and web camera has been used. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20modeling" title="background modeling">background modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=inter-frame%20difference" title=" inter-frame difference"> inter-frame difference</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20and%20tracking" title=" object detection and tracking"> object detection and tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/78578/a-real-time-moving-object-detection-and-tracking-scheme-and-its-implementation-for-video-surveillance-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78578.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">477</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1368</span> Violence Detection and Tracking on Moving Surveillance Video Using Machine Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abe%20Degale%20D.">Abe Degale D.</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> When creating automated video surveillance systems, violent action recognition is crucial. In recent years, hand-crafted feature detectors have been the primary method for achieving violence detection, such as the recognition of fighting activity. Researchers have also looked into learning-based representational models. On benchmark datasets created especially for the detection of violent sequences in sports and movies, these methods produced good accuracy results. The Hockey dataset's videos with surveillance camera motion present challenges for these algorithms for learning discriminating features. Image recognition and human activity detection challenges have shown success with deep representation-based methods. For the purpose of detecting violent images and identifying aggressive human behaviours, this research suggested a deep representation-based model using the transfer learning idea. The results show that the suggested approach outperforms state-of-the-art accuracy levels by learning the most discriminating features, attaining 99.34% and 99.98% accuracy levels on the Hockey and Movies datasets, respectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=violence%20detection" title="violence detection">violence detection</a>, <a href="https://publications.waset.org/abstracts/search?q=faster%20RCNN" title=" faster RCNN"> faster RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning%20and" title=" transfer learning and"> transfer learning and</a>, <a href="https://publications.waset.org/abstracts/search?q=surveillance%20video" title=" surveillance video"> surveillance video</a> </p> <a href="https://publications.waset.org/abstracts/171296/violence-detection-and-tracking-on-moving-surveillance-video-using-machine-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171296.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1367</span> A Passive Digital Video Authentication Technique Using Wavelet Based Optical Flow Variation Thresholding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20S.%20Remya">R. S. Remya</a>, <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20Sethulekshmi"> U. S. Sethulekshmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detecting the authenticity of a video is an important issue in digital forensics as Video is used as a silent evidence in court such as in child pornography, movie piracy cases, insurance claims, cases involving scientific fraud, traffic monitoring etc. The biggest threat to video data is the availability of modern open video editing tools which enable easy editing of videos without leaving any trace of tampering. In this paper, we propose an efficient passive method for inter-frame video tampering detection, its type and location by estimating the optical flow of wavelet features of adjacent frames and thresholding the variation in the estimated feature. The performance of the algorithm is compared with the z-score thresholding and achieved an efficiency above 95% on all the tested databases. The proposed method works well for videos with dynamic (forensics) as well as static (surveillance) background. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title="discrete wavelet transform">discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow" title=" optical flow"> optical flow</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow%20variation" title=" optical flow variation"> optical flow variation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20tampering" title=" video tampering"> video tampering</a> </p> <a href="https://publications.waset.org/abstracts/45252/a-passive-digital-video-authentication-technique-using-wavelet-based-optical-flow-variation-thresholding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45252.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">359</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1366</span> Efficient Utilization of Unmanned Aerial Vehicle (UAV) for Fishing through Surveillance for Fishermen</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20Ahilan">T. Ahilan</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Aswin%20Adityan"> V. Aswin Adityan</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Kailash"> S. Kailash</a> </p> <p class="card-text"><strong>Abstract:</strong></p> UAV’s are small remote operated or automated aerial surveillance systems without a human pilot aboard. UAV’s generally finds its use in military and special operation application, a recent growing trend in UAV’s finds its application in several civil and non military works such as inspection of power or pipelines. The objective of this paper is the augmentation of a UAV in order to replace the existing expensive sonar (sound navigation and ranging) based equipment amongst small scale fisherman, for whom access to sonar equipment are restricted due to limited economic resources. The surveillance equipment’s present in the UAV will relay data and GPS location onto a receiver on the fishing boat using RF signals, using which the location of the schools of fishes can be found. In addition to this, an emergency beacon system is present for rescue operations and drone recovery. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=UAV" title="UAV">UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=Surveillance" title=" Surveillance"> Surveillance</a>, <a href="https://publications.waset.org/abstracts/search?q=RF%20signals" title=" RF signals"> RF signals</a>, <a href="https://publications.waset.org/abstracts/search?q=fishing" title=" fishing"> fishing</a>, <a href="https://publications.waset.org/abstracts/search?q=sonar" title=" sonar"> sonar</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS" title=" GPS"> GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20stream" title=" video stream"> video stream</a>, <a href="https://publications.waset.org/abstracts/search?q=school%20of%20fish" title=" school of fish"> school of fish</a> </p> <a href="https://publications.waset.org/abstracts/34394/efficient-utilization-of-unmanned-aerial-vehicle-uav-for-fishing-through-surveillance-for-fishermen" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34394.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">457</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1365</span> User Authentication Using Graphical Password with Sound Signature</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Devi%20Srinivas">Devi Srinivas</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Sindhuja"> K. Sindhuja</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents architecture to improve surveillance applications based on the usage of the service oriented paradigm, with smart phones as user terminals, allowing application dynamic composition and increasing the flexibility of the system. According to the result of moving object detection research on video sequences, the movement of the people is tracked using video surveillance. The moving object is identified using the image subtraction method. The background image is subtracted from the foreground image, from that the moving object is derived. So the Background subtraction algorithm and the threshold value is calculated to find the moving image by using background subtraction algorithm the moving frame is identified. Then, by the threshold value the movement of the frame is identified and tracked. Hence, the movement of the object is identified accurately. This paper deals with low-cost intelligent mobile phone-based wireless video surveillance solution using moving object recognition technology. The proposed solution can be useful in various security systems and environmental surveillance. The fundamental rule of moving object detecting is given in the paper, then, a self-adaptive background representation that can update automatically and timely to adapt to the slow and slight changes of normal surroundings is detailed. While the subtraction of the present captured image and the background reaches a certain threshold, a moving object is measured to be in the current view, and the mobile phone will automatically notify the central control unit or the user through SMS (Short Message System). 
The main advantage of this system is when an unknown image is captured by the system it will alert the user automatically by sending an SMS to user’s mobile. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=security" title="security">security</a>, <a href="https://publications.waset.org/abstracts/search?q=graphical%20password" title=" graphical password"> graphical password</a>, <a href="https://publications.waset.org/abstracts/search?q=persuasive%20cued%20click%20points" title=" persuasive cued click points"> persuasive cued click points</a> </p> <a href="https://publications.waset.org/abstracts/23794/user-authentication-using-graphical-password-with-sound-signature" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23794.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">537</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1364</span> Extraction of Text Subtitles in Multimedia Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amarjit%20Singh">Amarjit Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method for extraction of text subtitles in large video is proposed. The video data needs to be annotated for many multimedia applications. Text is incorporated in digital video for the motive of providing useful information about that video. So need arises to detect text present in video to understanding and video indexing. This is achieved in two steps. First step is text localization and the second step is text verification. The method of text detection can be extended to text recognition which finds applications in automatic video indexing; video annotation and content based video retrieval. The method has been tested on various types of videos. 
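For the localization step, one common approach, offered here only as a stand-in since the abstract does not specify its method, is to propose character-like regions with MSER and keep their bounding boxes for the later verification stage.

```python
import cv2

def locate_text_candidates(frame):
    """Propose candidate subtitle regions in a video frame using MSER.

    Generic stand-in for the paper's text-localization step; a real system
    would follow this with a verification stage to reject non-text regions.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(gray)  # maximally stable extremal regions
    # Subtitles tend to produce wide, short boxes; keep those.
    return [(x, y, w, h) for (x, y, w, h) in bboxes if w > 2 * h]
```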
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video" title="video">video</a>, <a href="https://publications.waset.org/abstracts/search?q=subtitles" title=" subtitles"> subtitles</a>, <a href="https://publications.waset.org/abstracts/search?q=extraction" title=" extraction"> extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=annotation" title=" annotation"> annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=frames" title=" frames"> frames</a> </p> <a href="https://publications.waset.org/abstracts/24441/extraction-of-text-subtitles-in-multimedia-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24441.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">601</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1363</span> Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Alqahtani">Hamed Alqahtani</a>, <a href="https://publications.waset.org/abstracts/search?q=Manolya%20Kavakli-Thorne"> Manolya Kavakli-Thorne</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The large pose discrepancy is one of the critical challenges in face recognition during video surveillance. Due to the entanglement of pose attributes with identity information, the conventional approaches for pose-independent representation lack in providing quality results in recognizing largely posed faces. In this paper, we propose a practical approach to disentangle the pose attribute from the identity information followed by synthesis of a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space for carrying out factorization on the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and comparison with state of the art in the field prove that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=disentanglement" title="disentanglement">disentanglement</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/108319/adversarial-disentanglement-using-latent-classifier-for-pose-independent-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108319.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1362</span> Video Summarization: Techniques and Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zaynab%20El%20Khattabi">Zaynab El Khattabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Youness%20Tabii"> Youness Tabii</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelhamid%20Benkaddour"> Abdelhamid Benkaddour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, huge amount of multimedia repositories make the browsing, retrieval and delivery of video contents very slow and even difficult tasks. Video summarization has been proposed to improve faster browsing of large video collections and more efficient content indexing and access. In this paper, we focus on approaches to video summarization. The video summaries can be generated in many different forms. However, two fundamentals ways to generate summaries are static and dynamic. We present different techniques for each mode in the literature and describe some features used for generating video summaries. We conclude with perspective for further research. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=static%20summarization" title=" static summarization"> static summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20skimming" title=" video skimming"> video skimming</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20features" title=" semantic features"> semantic features</a> </p> <a href="https://publications.waset.org/abstracts/27644/video-summarization-techniques-and-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1361</span> Video Based Automatic License Plate Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Ganoun">Ali Ganoun</a>, <a href="https://publications.waset.org/abstracts/search?q=Wesam%20Algablawi"> Wesam Algablawi</a>, <a href="https://publications.waset.org/abstracts/search?q=Wasim%20BenAnaif"> Wasim BenAnaif </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video based traffic surveillance based on License Plate Recognition (LPR) system is an essential part for any intelligent traffic management system. The LPR system utilizes computer vision and pattern recognition technologies to obtain traffic and road information by detecting and recognizing vehicles based on their license plates. Generally, the video based LPR system is a challenging area of research due to the variety of environmental conditions. The LPR systems used in a wide range of commercial applications such as collision warning systems, finding stolen cars, controlling access to car parks and automatic congestion charge systems. This paper presents an automatic LPR system of Libyan license plate. The performance of the proposed system is evaluated with three video sequences. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=license%20plate%20recognition" title="license plate recognition">license plate recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition" title=" recognition"> recognition</a> </p> <a href="https://publications.waset.org/abstracts/9958/video-based-automatic-license-plate-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9958.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1360</span> Human Behavior Modeling in Video Surveillance of Conference Halls </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nour%20Charara">Nour Charara</a>, <a href="https://publications.waset.org/abstracts/search?q=Hussein%20Charara"> Hussein Charara</a>, <a href="https://publications.waset.org/abstracts/search?q=Omar%20Abou%20Khaled"> Omar Abou Khaled</a>, <a href="https://publications.waset.org/abstracts/search?q=Hani%20Abdallah"> Hani Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Elena%20Mugellini"> Elena Mugellini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a human behavior modeling approach in videos scenes. This approach is used to model the normal behaviors in the conference halls. We exploited the Probabilistic Latent Semantic Analysis technique (PLSA), using the 'Bag-of-Terms' paradigm, as a tool for exploring video data to learn the model by grouping similar activities. Our term vocabulary consists of 3D spatio-temporal patch groups assigned by the direction of motion. Our video representation ensures the spatial information, the object trajectory, and the motion. The main importance of this approach is that it can be adapted to detect abnormal behaviors in order to ensure and enhance human security. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=activity%20modeling" title="activity modeling">activity modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering" title=" clustering"> clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=PLSA" title=" PLSA"> PLSA</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20representation" title=" video representation"> video representation</a> </p> <a href="https://publications.waset.org/abstracts/70466/human-behavior-modeling-in-video-surveillance-of-conference-halls" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70466.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">394</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1359</span> Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=El%20Harraj%20Abdeslam">El Harraj Abdeslam</a>, <a href="https://publications.waset.org/abstracts/search?q=Raissouni%20Naoufal"> Raissouni Naoufal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The detection of moving objects from a video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most used approach for moving objects detection / tracking is background subtraction algorithms. Many approaches have been suggested for background subtraction. But, these are illumination change sensitive and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and, mainly, focus on the ability to detect moving objects on dynamic scenes, for possible applications in complex and restricted access areas monitoring, where moving and motionless persons must be reliably detected. It consists of three main phases, establishing illumination changes in variance, background/foreground modeling and morphological analysis for noise removing. We handle illumination changes using Contrast Limited Histogram Equalization (CLAHE), which limits the intensity of each pixel to user determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds we model each channel of a pixel as a mixture of K Gaussians (K=5) using Gaussian Mixture Model (GMM). Finally, we post process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental test, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title="video surveillance">video surveillance</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20subtraction" title=" background subtraction"> background subtraction</a>, <a href="https://publications.waset.org/abstracts/search?q=contrast%20limited%20histogram%20equalization" title=" contrast limited histogram equalization"> contrast limited histogram equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=illumination%20invariance" title=" illumination invariance"> illumination invariance</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=behavior%20understanding" title=" behavior understanding"> behavior understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20scenes" title=" dynamic scenes"> dynamic scenes</a> </p> <a href="https://publications.waset.org/abstracts/27499/toward-indoor-and-outdoor-surveillance-using-an-improved-fast-background-subtraction-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27499.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1358</span> Integrated Intensity and Spatial Enhancement Technique for Color Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Evan%20W.%20Krieger">Evan W. Krieger</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijayan%20K.%20Asari"> Vijayan K. Asari</a>, <a href="https://publications.waset.org/abstracts/search?q=Saibabu%20Arigela"> Saibabu Arigela</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video imagery captured for real-time security and surveillance applications is typically captured in complex lighting conditions. These less than ideal conditions can result in imagery that can have underexposed or overexposed regions. It is also typical that the video is too low in resolution for certain applications. The purpose of security and surveillance video is that we should be able to make accurate conclusions based on the images seen in the video. Therefore, if poor lighting and low resolution conditions occur in the captured video, the ability to make accurate conclusions based on the received information will be reduced. We propose a solution to this problem by using image preprocessing to improve these images before use in a particular application. The proposed algorithm will integrate an intensity enhancement algorithm with a super resolution technique. The intensity enhancement portion consists of a nonlinear inverse sign transformation and an adaptive contrast enhancement. The super resolution section is a single image super resolution technique is a Fourier phase feature based method that uses a machine learning approach with kernel regression. 
The proposed technique intelligently integrates these algorithms to be able to produce a high quality output while also being more efficient than the sequential use of these algorithms. This integration is accomplished by performing the proposed algorithm on the intensity image produced from the original color image. After enhancement and super resolution, a color restoration technique is employed to obtain an improved visibility color image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20range%20compression" title="dynamic range compression">dynamic range compression</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-level%20Fourier%20features" title=" multi-level Fourier features"> multi-level Fourier features</a>, <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20enhancement" title=" nonlinear enhancement"> nonlinear enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=super%20resolution" title=" super resolution"> super resolution</a> </p> <a href="https://publications.waset.org/abstracts/22706/integrated-intensity-and-spatial-enhancement-technique-for-color-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22706.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">554</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1357</span> High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hanaa%20M.%20Abdelgawad">Hanaa M. Abdelgawad</a>, <a href="https://publications.waset.org/abstracts/search?q=Mona%20Safar"> Mona Safar</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayman%20M.%20Wahba"> Ayman M. Wahba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Real-time image and video processing is a demand in many computer vision applications, e.g. video surveillance, traffic management and medical imaging. The processing of those video applications requires high computational power. Therefore, the optimal solution is the collaboration of CPU and hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Canny edge detection is one of the common blocks in the pre-processing phase of image and video processing pipeline. Our presented approach targets offloading the Canny edge detection algorithm from processing system (PS) to programmable logic (PL) taking the advantage of High Level Synthesis (HLS) tool flow to accelerate the implementation on Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration. The CPU utilization drops down and the frame rate jumps to 60 fps of 1080p full HD input video stream. 
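A software reference for the accelerated block, using OpenCV; on the Zynq, the equivalent Gaussian-blur and Canny stages run in programmable logic. The thresholds are conventional defaults, not the paper's values.

```python
import cv2

# Software baseline for the hardware-accelerated edge-detection stage.
frame = cv2.imread("frame_1080p.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
blurred = cv2.GaussianBlur(frame, (5, 5), 1.4)  # noise-suppression stage
edges = cv2.Canny(blurred, 50, 150)             # hysteresis thresholds (low, high)
```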
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=high%20level%20synthesis" title="high level synthesis">high level synthesis</a>, <a href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection" title=" canny edge detection"> canny edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hardware%20accelerators" title=" hardware accelerators"> hardware accelerators</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/21304/high-level-synthesis-of-canny-edge-detection-algorithm-on-zynq-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21304.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">478</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1356</span> Performance of High Efficiency Video Codec over Wireless Channels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Ayyub%20Khan">Mohd Ayyub Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Nadeem%20Akhtar"> Nadeem Akhtar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, Interactive Video Games. However, the raw videos posses very high bandwidth which makes the compression a must before its transmission over the wireless channels. The High Efficiency Video Codec (HEVC) (also called H.265) is latest state-of-the-art video coding standard developed by the Joint effort of ITU-T and ISO/IEC teams. HEVC is targeted for high resolution videos such as 4K or 8K resolutions that can fulfil the recent demands for video services. The compression ratio achieved by the HEVC is twice as compared to its predecessor H.264/AVC for same quality level. The compression efficiency is generally increased by removing more correlation between the frames/pixels using complex techniques such as extensive intra and inter prediction techniques. As more correlation is removed, the chances of interdependency among coded bits increases. Thus, bit errors may have large effect on the reconstructed video. Sometimes even single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of HEVC bitstream over additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes are also explored over the noisy channel. The video will be encoded using HEVC, and the coded bitstream is channel coded to provide some redundancies. The channel coded bitstream is then modulated using QAM and transmitted over AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream. The bitstream is then used to reconstruct the video using HEVC decoder. 
It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis presented in this paper may assist in choosing an FEC code rate that maximizes the quality of the reconstructed video over wireless channels. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AWGN" title="AWGN">AWGN</a>, <a href="https://publications.waset.org/abstracts/search?q=forward%20error%20correction" title=" forward error correction"> forward error correction</a>, <a href="https://publications.waset.org/abstracts/search?q=HEVC" title=" HEVC"> HEVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20coding" title=" video coding"> video coding</a>, <a href="https://publications.waset.org/abstracts/search?q=QAM" title=" QAM"> QAM</a> </p> <a href="https://publications.waset.org/abstracts/92062/performance-of-high-efficiency-video-codec-over-wireless-channels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92062.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1355</span> Lecture Video Indexing and Retrieval Using Topic Keywords</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20J.%20Sandesh">B. J. Sandesh</a>, <a href="https://publications.waset.org/abstracts/search?q=Saurabha%20Jirgi"> Saurabha Jirgi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Vidya"> S. Vidya</a>, <a href="https://publications.waset.org/abstracts/search?q=Prakash%20Eljer"> Prakash Eljer</a>, <a href="https://publications.waset.org/abstracts/search?q=Gowri%20Srinivasa"> Gowri Srinivasa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a framework to help users search and retrieve the portions of a lecture video that interest them. This is achieved by temporally segmenting and indexing the lecture video using topic keywords. For this purpose, we use text transcribed from the video together with documents relevant to the video topic extracted from the web. The keywords for indexing are found by applying non-negative matrix factorization (NMF) topic modeling to the web documents. Our technique first creates indices on the transcribed documents using the topic keywords; these are mapped back to the video to find the start and end times of the portions covering a particular topic. This time information is stored in the index table along with the topic keyword and is used to retrieve the specific portions of the video matching a user's query. 
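<p class="card-text">The keyword-extraction step described above can be sketched with scikit-learn's NMF; the four-line corpus, topic count, and number of terms per topic below are stand-ins for illustration, not the paper's configuration.</p> <pre class="card-text"><code>
# Sketch: factorize a TF-IDF matrix of topic-related documents with NMF and
# take the strongest terms of each component as index keywords.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "gradient descent minimizes the loss function of a neural network",
    "backpropagation computes gradients of the loss layer by layer",
    "support vector machines find a maximum margin hyperplane",
    "kernel methods map data into high dimensional feature spaces",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

nmf = NMF(n_components=2, random_state=0)   # one component per topic
nmf.fit(X)
terms = vec.get_feature_names_out()
for k, comp in enumerate(nmf.components_):
    top = comp.argsort()[-5:][::-1]         # five strongest terms per topic
    print("topic", k, [terms[i] for i in top])
</code></pre>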
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20indexing%20and%20retrieval" title="video indexing and retrieval">video indexing and retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=lecture%20videos" title=" lecture videos"> lecture videos</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20based%20video%20search" title=" content based video search"> content based video search</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20indexing" title=" multimodal indexing"> multimodal indexing</a> </p> <a href="https://publications.waset.org/abstracts/77066/lecture-video-indexing-and-retrieval-using-topic-keywords" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77066.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1354</span> Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20N.%20Raju">U. S. N. Raju</a>, <a href="https://publications.waset.org/abstracts/search?q=Kothuri%20Sai%20Kiran"> Kothuri Sai Kiran</a>, <a href="https://publications.waset.org/abstracts/search?q=Meena%20G.%20Kamal"> Meena G. Kamal</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Nikhil%20Pabba"> Vinay Nikhil Pabba</a>, <a href="https://publications.waset.org/abstracts/search?q=Suresh%20Kanaparthi"> Suresh Kanaparthi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is huge amount of lecture video data available for public use, and many more lecture videos are being created and uploaded every day. Searching for videos on required topics from this huge database is a challenging task. Therefore, an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. As the amount of video lecture data is huge, it is very inefficient to do the processing in a centralized computation framework. Hence, Hadoop Framework for distributed computing for Big Video Data is used. First, step in the process is automatic video segmentation and key-frame detection to offer a visual guideline for the video content navigation. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology on key-frames. The OCR and detected slide text line types are adopted for keyword extraction, by which both video- and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process can be improved for a large database by using distributed computing on Hadoop framework. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20lectures" title="video lectures">video lectures</a>, <a href="https://publications.waset.org/abstracts/search?q=big%20video%20data" title=" big video data"> big video data</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20retrieval" title=" video retrieval"> video retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=hadoop" title=" hadoop"> hadoop</a> </p> <a href="https://publications.waset.org/abstracts/26648/distributed-processing-for-content-based-lecture-video-retrieval-on-hadoop-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26648.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">534</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1353</span> Video Based Ambient Smoke Detection By Detecting Directional Contrast Decrease</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omair%20Ghori">Omair Ghori</a>, <a href="https://publications.waset.org/abstracts/search?q=Anton%20Stadler"> Anton Stadler</a>, <a href="https://publications.waset.org/abstracts/search?q=Stefan%20Wilk"> Stefan Wilk</a>, <a href="https://publications.waset.org/abstracts/search?q=Wolfgang%20Effelsberg"> Wolfgang Effelsberg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fire-related incidents account for extensive loss of life and material damage. Quick and reliable detection of occurring fires has high real world implications. Whereas a major research focus lies on the detection of outdoor fires, indoor camera-based fire detection is still an open issue. Cameras in combination with computer vision helps to detect flames and smoke more quickly than conventional fire detectors. In this work, we present a computer vision-based smoke detection algorithm based on contrast changes and a multi-step classification. This work accelerates computer vision-based fire detection considerably in comparison with classical indoor-fire detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contrast%20analysis" title="contrast analysis">contrast analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=early%20fire%20detection" title=" early fire detection"> early fire detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20smoke%20detection" title=" video smoke detection"> video smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/52006/video-based-ambient-smoke-detection-by-detecting-directional-contrast-decrease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52006.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1352</span> Video Stabilization Using Feature Point Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shamsundar%20Kulkarni">Shamsundar Kulkarni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video capturing by non-professionals will lead to unanticipated effects. Such as image distortion, image blurring etc. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos .A stable output video will be attained without the effect of jitter which is caused due to shaking of handheld camera during video recording. Firstly, salient points from each frame from the input video are identified and processed followed by optimizing and stabilize the video. Optimization includes the quality of the video stabilization. This method has shown good result in terms of stabilization and it discarded distortion from the output videos recorded in different circumstances. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20stabilization" title="video stabilization">video stabilization</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20feature%20matching" title=" point feature matching"> point feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=salient%20points" title=" salient points"> salient points</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20measurement" title=" image quality measurement"> image quality measurement</a> </p> <a href="https://publications.waset.org/abstracts/57341/video-stabilization-using-feature-point-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57341.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1351</span> Structural Analysis on the Composition of Video Game Virtual Spaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qin%20Luofeng">Qin Luofeng</a>, <a href="https://publications.waset.org/abstracts/search?q=Shen%20Siqi"> Shen Siqi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For the 58 years since the first video game came into being, the video game industry is getting through an explosive evolution from then on. Video games exert great influence on society and become a reflection of public life to some extent. Video game virtual spaces are where activities are taking place like real spaces. And that’s the reason why some architects pay attention to video games. However, compared to the researches on the appearance of games, we observe a lack of theoretical comprehensive on the construction of video game virtual spaces. The research method of this paper is to collect literature and conduct theoretical research about the virtual space in video games firstly. And then analogizing the opinions on the space phenomena from the theory of literature and films. Finally, this paper proposes a three-layer framework for the construction of video game virtual spaces: “algorithmic space-narrative space players space”, which correspond to the exterior, expressive, affective parts of the game space. Also, we illustrate each sub-space according to numerous instances of published video games. Hoping this writing could promote the interactive development of video games and architecture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20game" title="video game">video game</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20space" title=" virtual space"> virtual space</a>, <a href="https://publications.waset.org/abstracts/search?q=narrativity" title=" narrativity"> narrativity</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20space" title=" social space"> social space</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20connection" title=" emotional connection"> emotional connection</a> </p> <a href="https://publications.waset.org/abstracts/118519/structural-analysis-on-the-composition-of-video-game-virtual-spaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118519.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">267</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1350</span> Key Frame Based Video Summarization via Dependency Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janya%20Sainui">Janya Sainui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As a rapid growth of digital videos and data communications, video summarization that provides a shorter version of the video for fast video browsing and retrieval is necessary. Key frame extraction is one of the mechanisms to generate video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches heuristically select key frames; hence, the selected key frames may not be the most different frames and/or not cover the entire content of a video. In this paper, we propose a method of video summarization which provides the reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual informaion as our objective functions for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm finds key frames as an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach that produces video summary with better coverage of the entire video content while less redundancy among key frames comparing to the state-of-the-art approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=dependency%20measure" title=" dependency measure"> dependency measure</a>, <a href="https://publications.waset.org/abstracts/search?q=quadratic%20mutual%20information" title=" quadratic mutual information"> quadratic mutual information</a> </p> <a href="https://publications.waset.org/abstracts/75218/key-frame-based-video-summarization-via-dependency-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1349</span> Intervention of Threat and Surveillance on the Obedience of Preschool Children</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sarah%20Mhae%20Diaz">Sarah Mhae Diaz</a>, <a href="https://publications.waset.org/abstracts/search?q=Erika%20Anna%20De%20Leon"> Erika Anna De Leon</a>, <a href="https://publications.waset.org/abstracts/search?q=Jacklin%20Alwil%20Cartagena"> Jacklin Alwil Cartagena</a>, <a href="https://publications.waset.org/abstracts/search?q=Geordan%20Caruncong"> Geordan Caruncong</a>, <a href="https://publications.waset.org/abstracts/search?q=Micah%20Riezl%20Gonzales"> Micah Riezl Gonzales</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study examined the intervention of threat and surveillance on the obedience of 100 preschool children through a task variable experiment replicated from the previous studies of Higbee (1979), and Chua, J., Chua, M., & Pico (1983). Nowadays, obedience among Filipino children to authority is disregarded since they are more outspoken and rebel due to social influences. With this, aside from corporal punishment, threat and surveillance became a mean of inducing obedience. Threat, according to the Dissonance Theory, can give attitudinal change. On the other hand, surveillance, according to the Theory of Social Facilitation, can either contribute to the completion or failure to do a task. Through a 2x2 factorial design, results show; (1) threat (F(1,96) = 12.487, p < 0.05) and (2) surveillance (F(1,96)=9.942, p<.05) had a significant main effect on obedience, suggesting that the Dissonance Theory and Theory of Social Facilitation is respectively true in the study. On the other hand, (3) no interaction (F(1,96)=1.303, p > .05) was seen since threat and surveillance both have a main effect that could be positive or negative, or could be because of their complementary property as supported by the post-hoc results. Also, (4) most effective commanding style is threat and surveillance setting (M = 30.04, SD = 7.971) due to the significant main effect of the two variables. With this, in the Filipino Setting, threat and surveillance has proven to be a very effective strategy to discipline and induce obedience from a child. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=experimental%20study" title="experimental study">experimental study</a>, <a href="https://publications.waset.org/abstracts/search?q=obedience" title=" obedience"> obedience</a>, <a href="https://publications.waset.org/abstracts/search?q=preschool%20children" title=" preschool children"> preschool children</a>, <a href="https://publications.waset.org/abstracts/search?q=surveillance" title=" surveillance"> surveillance</a>, <a href="https://publications.waset.org/abstracts/search?q=threat" title=" threat"> threat</a> </p> <a href="https://publications.waset.org/abstracts/27034/intervention-of-threat-and-surveillance-on-the-obedience-of-preschool-children" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27034.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">488</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1348</span> Vision-Based Collision Avoidance for Unmanned Aerial Vehicles by Recurrent Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yao-Hong%20Tsai">Yao-Hong Tsai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the sensor technology, video surveillance has become the main way for security control in every big city in the world. Surveillance is usually used by governments for intelligence gathering, the prevention of crime, the protection of a process, person, group or object, or the investigation of crime. Many surveillance systems based on computer vision technology have been developed in recent years. Moving target tracking is the most common task for Unmanned Aerial Vehicle (UAV) to find and track objects of interest in mobile aerial surveillance for civilian applications. The paper is focused on vision-based collision avoidance for UAVs by recurrent neural networks. First, images from cameras on UAV were fused based on deep convolutional neural network. Then, a recurrent neural network was constructed to obtain high-level image features for object tracking and extracting low-level image features for noise reducing. The system distributed the calculation of the whole system to local and cloud platform to efficiently perform object detection, tracking and collision avoidance based on multiple UAVs. The experiments on several challenging datasets showed that the proposed algorithm outperforms the state-of-the-art methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title="unmanned aerial vehicle">unmanned aerial vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=collision%20avoidance" title=" collision avoidance"> collision avoidance</a> </p> <a href="https://publications.waset.org/abstracts/99181/vision-based-collision-avoidance-for-unmanned-aerial-vehicles-by-recurrent-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/99181.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1347</span> VideoAssist: A Labelling Assistant to Increase Efficiency in Annotating Video-Based Fire Dataset Using a Foundation Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Keyur%20Joshi">Keyur Joshi</a>, <a href="https://publications.waset.org/abstracts/search?q=Philip%20Dietrich"> Philip Dietrich</a>, <a href="https://publications.waset.org/abstracts/search?q=Tjark%20Windisch"> Tjark Windisch</a>, <a href="https://publications.waset.org/abstracts/search?q=Markus%20K%C3%B6nig"> Markus König</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of surveillance-based fire detection, the volume of incoming data is increasing rapidly. However, the labeling of a large industrial dataset is costly due to the high annotation costs associated with current state-of-the-art methods, which often require bounding boxes or segmentation masks for model training. This paper introduces VideoAssist, a video annotation solution that utilizes a video-based foundation model to annotate entire videos with minimal effort, requiring the labeling of bounding boxes for only a few keyframes. To the best of our knowledge, VideoAssist is the first method to significantly reduce the effort required for labeling fire detection videos. The approach offers bounding box and segmentation annotations for the video dataset with minimal manual effort. Results demonstrate that the performance of labels annotated by VideoAssist is comparable to those annotated by humans, indicating the potential applicability of this approach in fire detection scenarios. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fire%20detection" title="fire detection">fire detection</a>, <a href="https://publications.waset.org/abstracts/search?q=label%20annotation" title=" label annotation"> label annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=foundation%20models" title=" foundation models"> foundation models</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/194622/videoassist-a-labelling-assistant-to-increase-efficiency-in-annotating-video-based-fire-dataset-using-a-foundation-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/194622.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">7</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=45">45</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=46">46</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20surveillance&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>