
Search results for: moving objects

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="moving objects"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1867</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: moving objects</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1867</span> Searching k-Nearest Neighbors to be Appropriate under Gaming Environments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae%20Moon%20Lee">Jae Moon Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In general, algorithms to find continuous k-nearest neighbors have been researched on the location based services, monitoring periodically the moving objects such as vehicles and mobile phone. Those researches assume the environment that the number of query points is much less than that of moving objects and the query points are not moved but fixed. In gaming environments, this problem is when computing the next movement considering the neighbors such as flocking, crowd and robot simulations. In this case, every moving object becomes a query point so that the number of query point is same to that of moving objects and the query points are also moving. In this paper, we analyze the performance of the existing algorithms focused on location based services how they operate under gaming environments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=flocking%20behavior" title="flocking behavior">flocking behavior</a>, <a href="https://publications.waset.org/abstracts/search?q=heterogeneous%20agents" title=" heterogeneous agents"> heterogeneous agents</a>, <a href="https://publications.waset.org/abstracts/search?q=similarity" title=" similarity"> similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=simulation" title=" simulation"> simulation</a> </p> <a href="https://publications.waset.org/abstracts/8228/searching-k-nearest-neighbors-to-be-appropriate-under-gaming-environments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8228.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">302</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1866</span> A Background Subtraction Based Moving Object Detection Around the Host Vehicle</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyojin%20Lim">Hyojin Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Cuong%20Nguyen%20Khac"> Cuong Nguyen Khac</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Youl%20Jung"> Ho-Youl Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose moving object detection method which is helpful for driver to safely take his/her car out of parking lot. When moving objects such as motorbikes, pedestrians, the other cars and some obstacles are detected at the rear-side of host vehicle, the proposed algorithm can provide to driver warning. We assume that the host vehicle is just before departure. Gaussian Mixture Model (GMM) based background subtraction is basically applied. Pre-processing such as smoothing and post-processing as morphological filtering are added.We examine “which color space has better performance for detection of moving objects?” Three color spaces including RGB, YCbCr, and Y are applied and compared, in terms of detection rate. Through simulation, we prove that RGB space is more suitable for moving object detection based on background subtraction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gaussian%20mixture%20model" title="gaussian mixture model">gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20subtraction" title=" background subtraction"> background subtraction</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20object%20detection" title=" moving object detection"> moving object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20space" title=" color space"> color space</a>, <a href="https://publications.waset.org/abstracts/search?q=morphological%20filtering" title=" morphological filtering"> morphological filtering</a> </p> <a href="https://publications.waset.org/abstracts/32650/a-background-subtraction-based-moving-object-detection-around-the-host-vehicle" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">617</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1865</span> A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mulugeta%20K.%20Tefera">Mulugeta K. Tefera</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaolong%20Yang"> Xiaolong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20Liu"> Jian Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection and tracking of moving objects are very important in many application contexts such as detection and recognition of people, visual surveillance and automatic generation of video effect and so on. However, the task of detecting a real shape of an object in motion becomes tricky due to various challenges like dynamic scene changes, presence of shadow, and illumination variations due to light switch. For such systems, once the moving object is detected, tracking is also a crucial step for those applications that used in military defense, video surveillance, human computer interaction, and medical diagnostics as well as in commercial fields such as video games. In this paper, an object presents in dynamic background is detected using adaptive mixture of Gaussian based analysis of the video sequences. Then the detected moving object is tracked using the region based moving object tracking and inter-frame differential mechanisms to address the partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target by enhancing and post processing morphological operations. Secondly, the extracted object uses region based moving object tracking and inter-frame difference to improve the tracking speed of real-time moving objects in different video frames. Finally, the plotting method was applied to detect the moving objects effectively and describes the object’s motion being tracked. The experiment has been performed on image sequences acquired both indoor and outdoor environments and one stationary and web camera has been used. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20modeling" title="background modeling">background modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=inter-frame%20difference" title=" inter-frame difference"> inter-frame difference</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20and%20tracking" title=" object detection and tracking"> object detection and tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/78578/a-real-time-moving-object-detection-and-tracking-scheme-and-its-implementation-for-video-surveillance-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78578.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">477</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1864</span> Evaluation of Real-Time Background Subtraction Technique for Moving Object Detection Using Fast-Independent Component Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Naoum%20Abderrahmane">Naoum Abderrahmane</a>, <a href="https://publications.waset.org/abstracts/search?q=Boumehed%20Meriem"> Boumehed Meriem</a>, <a href="https://publications.waset.org/abstracts/search?q=Alshaqaqi%20Belal"> Alshaqaqi Belal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background subtraction algorithm is a larger used technique for detecting moving objects in video surveillance to extract the foreground objects from a reference background image. There are many challenges to test a good background subtraction algorithm, like changes in illumination, dynamic background such as swinging leaves, rain, snow, and the changes in the background, for example, moving and stopping of vehicles. In this paper, we propose an efficient and accurate background subtraction method for moving object detection in video surveillance. The main idea is to use a developed fast-independent component analysis (ICA) algorithm to separate background, noise, and foreground masks from an image sequence in practical environments. The fast-ICA algorithm is adapted and adjusted with a matrix calculation and searching for an optimum non-quadratic function to be faster and more robust. Moreover, in order to estimate the de-mixing matrix and the denoising de-mixing matrix parameters, we propose to convert all images to YCrCb color space, where the luma component Y (brightness of the color) gives suitable results. The proposed technique has been verified on the publicly available datasets CD net 2012 and CD net 2014, and experimental results show that our algorithm can detect competently and accurately moving objects in challenging conditions compared to other methods in the literature in terms of quantitative and qualitative evaluations with real-time frame rate. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20subtraction" title="background subtraction">background subtraction</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20object%20detection" title=" moving object detection"> moving object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-ICA" title=" fast-ICA"> fast-ICA</a>, <a href="https://publications.waset.org/abstracts/search?q=de-mixing%20matrix" title=" de-mixing matrix"> de-mixing matrix</a> </p> <a href="https://publications.waset.org/abstracts/156716/evaluation-of-real-time-background-subtraction-technique-for-moving-object-detection-using-fast-independent-component-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156716.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">96</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1863</span> An Experimental Investigation on the Amount of Drag Force of Sand on a Cone Moving at Low Uniform Speed</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Jahanandish">M. Jahanandish</a>, <a href="https://publications.waset.org/abstracts/search?q=Gh.%20Sadeghian"> Gh. Sadeghian</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20H.%20Daneshvar"> M. H. Daneshvar</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20H.%20Jahanandish"> M. H. Jahanandish</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The amount of resistance of a particular medium like soil to the moving objects is the interest of many areas in science. These include soil mechanics, geotechnical engineering, powder mechanics etc. Knowledge of drag force is also used for estimating the amount of momentum of fired objects like bullets. This paper focuses on measurement of drag force of sand on a cone when it moves at a low constant speed. A 30-degree apex angle cone has been used for this purpose. The study consisted of both loose and dense conditions of the soil. The applied speed has been in the range of 0.1 to 10 mm/min. The results indicate that the required force is basically independent of the cone speed; but, it is very dependent on the material densification and confining stress. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=drag%20force" title="drag force">drag force</a>, <a href="https://publications.waset.org/abstracts/search?q=sand" title=" sand"> sand</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20speed" title=" moving speed"> moving speed</a>, <a href="https://publications.waset.org/abstracts/search?q=friction%20angle" title=" friction angle"> friction angle</a>, <a href="https://publications.waset.org/abstracts/search?q=densification" title=" densification"> densification</a>, <a href="https://publications.waset.org/abstracts/search?q=confining%20stress" title=" confining stress"> confining stress</a> </p> <a href="https://publications.waset.org/abstracts/58734/an-experimental-investigation-on-the-amount-of-drag-force-of-sand-on-a-cone-moving-at-low-uniform-speed" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58734.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">367</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1862</span> Clustering Color Space, Time Interest Points for Moving Objects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Insaf%20Bellamine">Insaf Bellamine</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamid%20Tairi"> Hamid Tairi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detecting moving objects in sequences is an essential step for video analysis. This paper mainly contributes to the Color Space-Time Interest Points (CSTIP) extraction and detection. We propose a new method for detection of moving objects. Two main steps compose the proposed method. First, we suggest to apply the algorithm of the detection of Color Space-Time Interest Points (CSTIP) on both components of the Color Structure-Texture Image Decomposition which is based on a Partial Differential Equation (PDE): a color geometric structure component and a color texture component. A descriptor is associated to each of these points. In a second stage, we address the problem of grouping the points (CSTIP) into clusters. Experiments and comparison to other motion detection methods on challenging sequences show the performance of the proposed method and its utility for video analysis. Experimental results are obtained from very different types of videos, namely sport videos and animation movies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Color%20Space-Time%20Interest%20Points%20%28CSTIP%29" title="Color Space-Time Interest Points (CSTIP)">Color Space-Time Interest Points (CSTIP)</a>, <a href="https://publications.waset.org/abstracts/search?q=Color%20Structure-Texture%20Image%20Decomposition" title=" Color Structure-Texture Image Decomposition"> Color Structure-Texture Image Decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=Motion%20Detection" title=" Motion Detection"> Motion Detection</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering" title=" clustering"> clustering</a> </p> <a href="https://publications.waset.org/abstracts/21989/clustering-color-space-time-interest-points-for-moving-objects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21989.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">378</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1861</span> Pyramidal Lucas-Kanade Optical Flow Based Moving Object Detection in Dynamic Scenes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyojin%20Lim">Hyojin Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Cuong%20Nguyen%20Khac"> Cuong Nguyen Khac</a>, <a href="https://publications.waset.org/abstracts/search?q=Yeongyu%20Choi"> Yeongyu Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Youl%20Jung"> Ho-Youl Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a simple moving object detection, which is based on motion vectors obtained from pyramidal Lucas-Kanade optical flow. The proposed method detects moving objects such as pedestrians, the other vehicles and some obstacles at the front-side of the host vehicle, and it can provide the warning to the driver. Motion vectors are obtained by using pyramidal Lucas-Kanade optical flow, and some outliers are eliminated by comparing the amplitude of each vector with the pre-defined threshold value. The background model is obtained by calculating the mean and the variance of the amplitude of recent motion vectors in the rectangular shaped local region called the cell. The model is applied as the reference to classify motion vectors of moving objects and those of background. Motion vectors are clustered to rectangular regions by using the unsupervised clustering K-means algorithm. Labeling method is applied to label groups which is close to each other, using by distance between each center points of rectangular. Through the simulations tested on four kinds of scenarios such as approaching motorbike, vehicle, and pedestrians to host vehicle, we prove that the proposed is simple but efficient for moving object detection in parking lots. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=moving%20object%20detection" title="moving object detection">moving object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20scene" title=" dynamic scene"> dynamic scene</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow" title=" optical flow"> optical flow</a>, <a href="https://publications.waset.org/abstracts/search?q=pyramidal%20optical%20flow" title=" pyramidal optical flow"> pyramidal optical flow</a> </p> <a href="https://publications.waset.org/abstracts/50958/pyramidal-lucas-kanade-optical-flow-based-moving-object-detection-in-dynamic-scenes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50958.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">349</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1860</span> Automatic Detection and Update of Region of Interest in Vehicular Traffic Surveillance Videos</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Naydelis%20Brito%20Su%C3%A1rez">Naydelis Brito Suárez</a>, <a href="https://publications.waset.org/abstracts/search?q=Deni%20Librado%20Torres%20Rom%C3%A1n"> Deni Librado Torres Román</a>, <a href="https://publications.waset.org/abstracts/search?q=Fernando%20Hermosillo%20Reynoso"> Fernando Hermosillo Reynoso</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic detection and generation of a dynamic ROI (Region of Interest) in vehicle traffic surveillance videos based on a static camera in Intelligent Transportation Systems is challenging for computer vision-based systems. The dynamic ROI, being a changing ROI, should capture any other moving object located outside of a static ROI. In this work, the video is represented by a Tensor model composed of a Background and a Foreground Tensor, which contains all moving vehicles or objects. The values of each pixel over a time interval are represented by time series, and some pixel rows were selected. This paper proposes a pixel entropy-based algorithm for automatic detection and generation of a dynamic ROI in traffic videos under the assumption of two types of theoretical pixel entropy behaviors: (1) a pixel located at the road shows a high entropy value due to disturbances in this zone by vehicle traffic, (2) a pixel located outside the road shows a relatively low entropy value. To study the statistical behavior of the selected pixels, detecting the entropy changes and consequently moving objects, Shannon, Tsallis, and Approximate entropies were employed. Although Tsallis entropy achieved very high results in real-time, Approximate entropy showed results slightly better but in greater time. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convex%20hull" title="convex hull">convex hull</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20ROI%20detection" title=" dynamic ROI detection"> dynamic ROI detection</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20entropy" title=" pixel entropy"> pixel entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20series" title=" time series"> time series</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20objects" title=" moving objects"> moving objects</a> </p> <a href="https://publications.waset.org/abstracts/174020/automatic-detection-and-update-of-region-of-interest-in-vehicular-traffic-surveillance-videos" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174020.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1859</span> Video Foreground Detection Based on Adaptive Mixture Gaussian Model for Video Surveillance Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Alavianmehr">M. A. Alavianmehr</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Tashk"> A. Tashk</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Sodagaran"> A. Sodagaran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Modeling background and moving objects are significant techniques for video surveillance and other video processing applications. This paper presents a foreground detection algorithm that is robust against illumination changes and noise based on adaptive mixture Gaussian model (GMM), and provides a novel and practical choice for intelligent video surveillance systems using static cameras. In the previous methods, the image of still objects (background image) is not significant. On the contrary, this method is based on forming a meticulous background image and exploiting it for separating moving objects from their background. The background image is specified either manually, by taking an image without vehicles, or is detected in real-time by forming a mathematical or exponential average of successive images. The proposed scheme can offer low image degradation. The simulation results demonstrate high degree of performance for the proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20models" title=" background models"> background models</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a>, <a href="https://publications.waset.org/abstracts/search?q=foreground%20detection" title=" foreground detection"> foreground detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a> </p> <a href="https://publications.waset.org/abstracts/16364/video-foreground-detection-based-on-adaptive-mixture-gaussian-model-for-video-surveillance-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16364.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">516</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1858</span> An Efficient Fundamental Matrix Estimation for Moving Object Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yeongyu%20Choi">Yeongyu Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ju%20H.%20Park"> Ju H. Park</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20M.%20Lee"> S. M. Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Youl%20Jung"> Ho-Youl Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an improved method for estimating fundamental matrix is proposed. The method is applied effectively to monocular camera based moving object detection. The method consists of corner points detection, moving object&rsquo;s motion estimation and fundamental matrix calculation. The corner points are obtained by using Harris corner detector, motions of moving objects is calculated from pyramidal Lucas-Kanade optical flow algorithm. Through epipolar geometry analysis using RANSAC, the fundamental matrix is calculated. In this method, we have improved the performances of moving object detection by using two threshold values that determine inlier or outlier. Through the simulations, we compare the performances with varying the two threshold values. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=corner%20detection" title="corner detection">corner detection</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow" title=" optical flow"> optical flow</a>, <a href="https://publications.waset.org/abstracts/search?q=epipolar%20geometry" title=" epipolar geometry"> epipolar geometry</a>, <a href="https://publications.waset.org/abstracts/search?q=RANSAC" title=" RANSAC"> RANSAC</a> </p> <a href="https://publications.waset.org/abstracts/79103/an-efficient-fundamental-matrix-estimation-for-moving-object-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79103.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">408</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1857</span> Weight Estimation Using the K-Means Method in Steelmaking’s Overhead Cranes in Order to Reduce Swing Error</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seyedamir%20Makinejadsanij">Seyedamir Makinejadsanij</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the most important factors in the production of quality steel is to know the exact weight of steel in the steelmaking area. In this study, a calculation method is presented to estimate the exact weight of the melt as well as the objects transported by the overhead crane. Iran Alloy Steel Company's steelmaking area has three 90-ton cranes, which are responsible for transferring the ladles and ladle caps between 34 areas in the melt shop. Each crane is equipped with a Disomat Tersus weighing system that calculates and displays real-time weight. The moving object has a variable weight due to swinging, and the weighing system has an error of about +-5%. This means that when the object is moving by a crane, which weighs about 80 tons, the device (Disomat Tersus system) calculates about 4 tons more or 4 tons less, and this is the biggest problem in calculating a real weight. The k-means algorithm is an unsupervised clustering method that was used here. The best result was obtained by considering 3 centers. Compared to the normal average(one) or two, four, five, and six centers, the best answer is with 3 centers, which is logically due to the elimination of noise above and below the real weight. Every day, the standard weight is moved with working cranes to test and calibrate cranes. The results are shown that the accuracy is about 40 kilos per 60 tons (standard weight). As a result, with this method, the accuracy of moving weight is calculated as 99.95%. K-means is used to calculate the exact mean of objects. The stopping criterion of the algorithm is also the number of 1000 repetitions or not moving the points between the clusters. As a result of the implementation of this system, the crane operator does not stop while moving objects and continues his activity regardless of weight calculations. Also, production speed increased, and human error decreased. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=k-means" title="k-means">k-means</a>, <a href="https://publications.waset.org/abstracts/search?q=overhead%20crane" title=" overhead crane"> overhead crane</a>, <a href="https://publications.waset.org/abstracts/search?q=melt%20weight" title=" melt weight"> melt weight</a>, <a href="https://publications.waset.org/abstracts/search?q=weight%20estimation" title=" weight estimation"> weight estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=swing%20problem" title=" swing problem"> swing problem</a> </p> <a href="https://publications.waset.org/abstracts/164444/weight-estimation-using-the-k-means-method-in-steelmakings-overhead-cranes-in-order-to-reduce-swing-error" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164444.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1856</span> Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuan-Hsiang%20Chang">Yuan-Hsiang Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Pin-Chi%20Lin"> Pin-Chi Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Li-Der%20Jeng"> Li-Der Jeng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Advance in techniques of image and video processing has enabled the development of intelligent video surveillance systems. This study was aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. The adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during the interaction, the Kalman filter was used to retain a complete trajectory for each human object. Finally, the motion trajectory analysis was developed to distinguish between the interaction and non-interaction events based on derivatives of trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system could achieve the classification accuracy of 80% in interaction events and 95% in non-interaction events, respectively. In summary, we have explored the idea to investigate a system for the automatic classification of events for interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated in an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatch, fighting, etc.). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion%20detection" title="motion detection">motion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20tracking" title=" motion tracking"> motion tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory%20analysis" title=" trajectory analysis"> trajectory analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/13650/automatic-motion-trajectory-analysis-for-dual-human-interaction-using-video-sequences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">548</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1855</span> Dynamic Background Updating for Lightweight Moving Object Detection </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kelemewerk%20Destalem">Kelemewerk Destalem</a>, <a href="https://publications.waset.org/abstracts/search?q=Joongjae%20Cho"> Joongjae Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Jaeseong%20Lee"> Jaeseong Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Ju%20H.%20Park"> Ju H. Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Joonhyuk%20Yoo"> Joonhyuk Yoo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background subtraction and temporal difference are often used for moving object detection in video. Both approaches are computationally simple and easy to be deployed in real-time image processing. However, while the background subtraction is highly sensitive to dynamic background and illumination changes, the temporal difference approach is poor at extracting relevant pixels of the moving object and at detecting the stopped or slowly moving objects in the scene. In this paper, we propose a moving object detection scheme based on adaptive background subtraction and temporal difference exploiting dynamic background updates. The proposed technique consists of a histogram equalization, a linear combination of background and temporal difference, followed by the novel frame-based and pixel-based background updating techniques. Finally, morphological operations are applied to the output images. Experimental results show that the proposed algorithm can solve the drawbacks of both background subtraction and temporal difference methods and can provide better performance than that of each method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20subtraction" title="background subtraction">background subtraction</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20updating" title=" background updating"> background updating</a>, <a href="https://publications.waset.org/abstracts/search?q=real%20time" title=" real time"> real time</a>, <a href="https://publications.waset.org/abstracts/search?q=light%20weight%20algorithm" title=" light weight algorithm"> light weight algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20difference" title=" temporal difference"> temporal difference</a> </p> <a href="https://publications.waset.org/abstracts/31063/dynamic-background-updating-for-lightweight-moving-object-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31063.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1854</span> Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrey%20V.%20Timofeev">Andrey V. Timofeev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The practical efficient approach is suggested to estimate the high-speed objects instant bounds in C-OTDR monitoring systems. In case of super-dynamic objects (trains, cars) is difficult to obtain the adequate estimate of the instantaneous object localization because of estimation lag. In other words, reliable estimation coordinates of monitored object requires taking some time for data observation collection by means of C-OTDR system, and only if the required sample volume will be collected the final decision could be issued. But it is contrary to requirements of many real applications. For example, in rail traffic management systems we need to get data off the dynamic objects localization in real time. The way to solve this problem is to use the set of statistical independent parameters of C-OTDR signals for obtaining the most reliable solution in real time. The parameters of this type we can call as 'signaling parameters' (SP). There are several the SP’s which carry information about dynamic objects instant localization for each of C-OTDR channels. The problem is that some of these parameters are very sensitive to dynamics of seismoacoustic emission sources but are non-stable. On the other hand, in case the SP is very stable it becomes insensitive as a rule. This report contains describing the method for SP’s co-processing which is designed to get the most effective dynamic objects localization estimates in the C-OTDR monitoring system framework. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=C-OTDR-system" title="C-OTDR-system">C-OTDR-system</a>, <a href="https://publications.waset.org/abstracts/search?q=co-processing%20of%20signaling%20parameters" title=" co-processing of signaling parameters"> co-processing of signaling parameters</a>, <a href="https://publications.waset.org/abstracts/search?q=high-speed%20objects%20localization" title="high-speed objects localization">high-speed objects localization</a>, <a href="https://publications.waset.org/abstracts/search?q=multichannel%20monitoring%20systems" title=" multichannel monitoring systems "> multichannel monitoring systems </a> </p> <a href="https://publications.waset.org/abstracts/32580/instant-location-detection-of-objects-moving-at-high-speed-in-c-otdr-monitoring-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1853</span> Enhanced Acquisition Time of a Quantum Holography Scheme within a Nonlinear Interferometer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sergio%20Tovar-P%C3%A9rez">Sergio Tovar-Pérez</a>, <a href="https://publications.waset.org/abstracts/search?q=Sebastian%20T%C3%B6pfer"> Sebastian Töpfer</a>, <a href="https://publications.waset.org/abstracts/search?q=Markus%20Gr%C3%A4fe"> Markus Gräfe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The work proposes a technique that decreases the detection acquisition time of quantum holography schemes down to one-third; this allows the possibility to image moving objects. Since its invention, quantum holography with undetected photon schemes has gained interest in the scientific community. This is mainly due to its ability to tailor the detected wavelengths according to the needs of the scheme implementation. Yet this wavelength flexibility grants the scheme a wide range of possible applications; an important matter was yet to be addressed. Since the scheme uses digital phase-shifting techniques to retrieve the information of the object out of the interference pattern, it is necessary to acquire a set of at least four images of the interference pattern along with well-defined phase steps to recover the full object information. Hence, the imaging method requires larger acquisition times to produce well-resolved images. As a consequence, the measurement of moving objects remains out of the reach of the imaging scheme. This work presents the use and implementation of a spatial light modulator along with a digital holographic technique called quasi-parallel phase-shifting. This technique uses the spatial light modulator to build a structured phase image consisting of a chessboard pattern containing the different phase steps for digitally calculating the object information. Depending on the reduction in the number of needed frames, the acquisition time reduces by a significant factor. This technique opens the door to the implementation of the scheme for moving objects. In particular, the application of this scheme in imaging alive specimens comes one step closer. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=quasi-parallel%20phase%20shifting" title="quasi-parallel phase shifting">quasi-parallel phase shifting</a>, <a href="https://publications.waset.org/abstracts/search?q=quantum%20imaging" title=" quantum imaging"> quantum imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=quantum%20holography" title=" quantum holography"> quantum holography</a>, <a href="https://publications.waset.org/abstracts/search?q=quantum%20metrology" title=" quantum metrology"> quantum metrology</a> </p> <a href="https://publications.waset.org/abstracts/156520/enhanced-acquisition-time-of-a-quantum-holography-scheme-within-a-nonlinear-interferometer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156520.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">114</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1852</span> Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=El%20Harraj%20Abdeslam">El Harraj Abdeslam</a>, <a href="https://publications.waset.org/abstracts/search?q=Raissouni%20Naoufal"> Raissouni Naoufal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The detection of moving objects from a video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most used approach for moving objects detection / tracking is background subtraction algorithms. Many approaches have been suggested for background subtraction. But, these are illumination change sensitive and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and, mainly, focus on the ability to detect moving objects on dynamic scenes, for possible applications in complex and restricted access areas monitoring, where moving and motionless persons must be reliably detected. It consists of three main phases, establishing illumination changes in variance, background/foreground modeling and morphological analysis for noise removing. We handle illumination changes using Contrast Limited Histogram Equalization (CLAHE), which limits the intensity of each pixel to user determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds we model each channel of a pixel as a mixture of K Gaussians (K=5) using Gaussian Mixture Model (GMM). Finally, we post process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental test, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title="video surveillance">video surveillance</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20subtraction" title=" background subtraction"> background subtraction</a>, <a href="https://publications.waset.org/abstracts/search?q=contrast%20limited%20histogram%20equalization" title=" contrast limited histogram equalization"> contrast limited histogram equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=illumination%20invariance" title=" illumination invariance"> illumination invariance</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=behavior%20understanding" title=" behavior understanding"> behavior understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20scenes" title=" dynamic scenes"> dynamic scenes</a> </p> <a href="https://publications.waset.org/abstracts/27499/toward-indoor-and-outdoor-surveillance-using-an-improved-fast-background-subtraction-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27499.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1851</span> Numerical Simulation of a Three-Dimensional Framework under the Action of Two-Dimensional Moving Loads</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jia-Jang%20Wu">Jia-Jang Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this research is to develop a general technique so that one may predict the dynamic behaviour of a three-dimensional scale crane model subjected to time-dependent moving point forces by means of conventional finite element computer packages. To this end, the whole scale crane model is divided into two parts: the stationary framework and the moving substructure. In such a case, the dynamic responses of a scale crane model can be predicted from the forced vibration responses of the stationary framework due to actions of the four time-dependent moving point forces induced by the moving substructure. Since the magnitudes and positions of the moving point forces are dependent on the relative positions between the trolley, moving substructure and the stationary framework, it can be found from the numerical results that the time histories for the moving speeds of the moving substructure and the trolley are the key factors affecting the dynamic responses of the scale crane model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=moving%20load" title="moving load">moving load</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20substructure" title=" moving substructure"> moving substructure</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20responses" title=" dynamic responses"> dynamic responses</a>, <a href="https://publications.waset.org/abstracts/search?q=forced%20vibration%20responses" title=" forced vibration responses"> forced vibration responses</a> </p> <a href="https://publications.waset.org/abstracts/37626/numerical-simulation-of-a-three-dimensional-framework-under-the-action-of-two-dimensional-moving-loads" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37626.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1850</span> Vibration Imaging Method for Vibrating Objects with Translation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kohei%20Shimasaki">Kohei Shimasaki</a>, <a href="https://publications.waset.org/abstracts/search?q=Tomoaki%20Okamura"> Tomoaki Okamura</a>, <a href="https://publications.waset.org/abstracts/search?q=Idaku%20Ishii"> Idaku Ishii</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose a vibration imaging method for high frame rate (HFR)-video-based localization of vibrating objects with large translations. When the ratio of the translation speed of a target to its vibration frequency is large, obtaining its frequency response in image intensities becomes difficult because one or no waves are observable at the same pixel. Our method can precisely localize moving objects with vibration by virtually translating multiple image sequences for pixel-level short-time Fourier transform to observe multiple waves at the same pixel. The effectiveness of the proposed method is demonstrated by analyzing several HFR videos of flying insects in real scenarios. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HFR%20video%20analysis" title="HFR video analysis">HFR video analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel-level%20vibration%20source%20localization" title=" pixel-level vibration source localization"> pixel-level vibration source localization</a>, <a href="https://publications.waset.org/abstracts/search?q=short-time%20Fourier%20transform" title=" short-time Fourier transform"> short-time Fourier transform</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20translation" title=" virtual translation"> virtual translation</a> </p> <a href="https://publications.waset.org/abstracts/160120/vibration-imaging-method-for-vibrating-objects-with-translation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160120.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1849</span> Design and Implementation of a Control System for a Walking Robot with Color Sensing and Line following Using PIC and ATMEL Microcontrollers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ibraheem%20K.%20Ibraheem">Ibraheem K. Ibraheem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this research is to design and implement line-tracking mobile robot. The robot must follow a line drawn on the floor with different color, avoids hitting moving object like another moving robot or walking people and achieves color sensing. The control system reacts by controlling each of the motors to keep the tracking sensor over the middle of the line. Proximity sensors used to avoid hitting moving objects that may pass in front of the robot. The programs have been written using micro c instructions, then converted into PIC16F887 ATmega48/88/168 microcontrollers counterparts. Practical simulations show that the walking robot accurately achieves line following action and exactly recognizes the colors and avoids any obstacle in front of it. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20sensing" title="color sensing">color sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=H-bridge" title=" H-bridge"> H-bridge</a>, <a href="https://publications.waset.org/abstracts/search?q=line%20following" title=" line following"> line following</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20robot" title=" mobile robot"> mobile robot</a>, <a href="https://publications.waset.org/abstracts/search?q=PIC%20microcontroller" title=" PIC microcontroller"> PIC microcontroller</a>, <a href="https://publications.waset.org/abstracts/search?q=obstacle%20avoidance" title=" obstacle avoidance"> obstacle avoidance</a>, <a href="https://publications.waset.org/abstracts/search?q=phototransistor" title=" phototransistor"> phototransistor</a> </p> <a href="https://publications.waset.org/abstracts/7881/design-and-implementation-of-a-control-system-for-a-walking-robot-with-color-sensing-and-line-following-using-pic-and-atmel-microcontrollers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7881.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">398</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1848</span> Movies and Dynamic Mathematical Objects on Trigonometry for Mobile Phones</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kazuhisa%20Takagi">Kazuhisa Takagi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper is about movies and dynamic objects for mobile phones. Dynamic objects are the software programmed by JavaScript. They consist of geometric figures and work on HTML5-compliant browsers. Mobile phones are very popular among teenagers. They like watching movies and playing games on them. So, mathematics movies and dynamic objects would enhance teaching and learning processes. In the movies, manga characters speak with artificially synchronized voices. They teach trigonometry together with dynamic mathematical objects. Many movies are created. They are Windows Media files or MP4 movies. These movies and dynamic objects are not only used in the classroom but also distributed to students. By watching movies, students can study trigonometry before or after class. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20mathematical%20object" title="dynamic mathematical object">dynamic mathematical object</a>, <a href="https://publications.waset.org/abstracts/search?q=javascript" title=" javascript"> javascript</a>, <a href="https://publications.waset.org/abstracts/search?q=google%20drive" title=" google drive"> google drive</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20jet" title=" transfer jet"> transfer jet</a> </p> <a href="https://publications.waset.org/abstracts/67497/movies-and-dynamic-mathematical-objects-on-trigonometry-for-mobile-phones" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67497.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">260</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1847</span> 3D Objects Indexing Using Spherical Harmonic for Optimum Measurement Similarity </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Hellam">S. Hellam</a>, <a href="https://publications.waset.org/abstracts/search?q=Y.%20Oulahrir"> Y. Oulahrir</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20El%20Mounchid"> F. El Mounchid</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Sadiq"> A. Sadiq</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Mbarki"> S. Mbarki</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a method for three-dimensional (3-D)-model indexing based on defining a new descriptor, which we call new descriptor using spherical harmonics. The purpose of the method is to minimize, the processing time on the database of objects models and the searching time of similar objects to request object. Firstly we start by defining the new descriptor using a new division of 3-D object in a sphere. Then we define a new distance which will be used in the search for similar objects in the database. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20indexation" title="3D indexation">3D indexation</a>, <a href="https://publications.waset.org/abstracts/search?q=spherical%20harmonic" title=" spherical harmonic"> spherical harmonic</a>, <a href="https://publications.waset.org/abstracts/search?q=similarity%20of%203D%20objects" title=" similarity of 3D objects"> similarity of 3D objects</a>, <a href="https://publications.waset.org/abstracts/search?q=measurement%20similarity" title=" measurement similarity"> measurement similarity</a> </p> <a href="https://publications.waset.org/abstracts/14277/3d-objects-indexing-using-spherical-harmonic-for-optimum-measurement-similarity" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14277.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1846</span> Moving Object Detection Using Histogram of Uniformly Oriented Gradient</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei-Jong%20Yang">Wei-Jong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu-Siang%20Su"> Yu-Siang Su</a>, <a href="https://publications.waset.org/abstracts/search?q=Pau-Choo%20Chung"> Pau-Choo Chung</a>, <a href="https://publications.waset.org/abstracts/search?q=Jar-Ferr%20Yang"> Jar-Ferr Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS). There are two important moving objects, pedestrians and scooters in ADAS. In real-world systems, there exist two important challenges for MOD, including the computational complexity and the detection accuracy. The histogram of oriented gradient (HOG) features can easily detect the edge of object without invariance to changes in illumination and shadowing. However, to reduce the execution time for real-time systems, the image size should be down sampled which would lead the outlier influence to increase. For this reason, we propose the histogram of uniformly-oriented gradient (HUG) features to get better accurate description of the contour of human body. In the testing phase, the support vector machine (SVM) with linear kernel function is involved. Experimental results show the correctness and effectiveness of the proposed method. With SVM classifiers, the real testing results show the proposed HUG features achieve better than classification performance than the HOG ones. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=moving%20object%20detection" title="moving object detection">moving object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram%20of%20oriented%20gradient" title=" histogram of oriented gradient"> histogram of oriented gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram%20of%20uniformly-oriented%20gradient" title=" histogram of uniformly-oriented gradient"> histogram of uniformly-oriented gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20support%20vector%20machine" title=" linear support vector machine"> linear support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/62854/moving-object-detection-using-histogram-of-uniformly-oriented-gradient" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62854.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">594</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1845</span> Contrastive Learning for Unsupervised Object Segmentation in Sequential Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tian%20Zhang">Tian Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Unsupervised object segmentation aims at segmenting objects in sequential images and obtaining the mask of each object without any manual intervention. Unsupervised segmentation remains a challenging task due to the lack of prior knowledge about these objects. Previous methods often require manually specifying the action of each object, which is often difficult to obtain. Instead, this paper does not need action information of objects and automatically learns the actions and relations among objects from the structured environment. To obtain the object segmentation of sequential images, the relationships between objects and images are extracted to infer the action and interaction of objects based on the multi-head attention mechanism. Three types of objects’ relationships in the object segmentation task are proposed: the relationship between objects in the same frame, the relationship between objects in two frames, and the relationship between objects and historical information. Based on these relationships, the proposed model (1) is effective in multiple objects segmentation tasks, (2) just needs images as input, and (3) produces better segmentation results as more relationships are considered. The experimental results on multiple datasets show that this paper’s method achieves state-of-art performance. The quantitative and qualitative analyses of the result are conducted. The proposed method could be easily extended to other similar applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20object%20segmentation" title="unsupervised object segmentation">unsupervised object segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title=" attention mechanism"> attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=contrastive%20learning" title=" contrastive learning"> contrastive learning</a>, <a href="https://publications.waset.org/abstracts/search?q=structured%20environment" title=" structured environment"> structured environment</a> </p> <a href="https://publications.waset.org/abstracts/148401/contrastive-learning-for-unsupervised-object-segmentation-in-sequential-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148401.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">109</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1844</span> Antenna for Energy Harvesting in Wireless Connected Objects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nizar%20Sakli">Nizar Sakli</a>, <a href="https://publications.waset.org/abstracts/search?q=Chayma%20Bahar"> Chayma Bahar</a>, <a href="https://publications.waset.org/abstracts/search?q=Chokri%20Baccouch"> Chokri Baccouch</a>, <a href="https://publications.waset.org/abstracts/search?q=Hedi%20Sakli"> Hedi Sakli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> If connected objects multiply, they are becoming a challenge in more than one way. In particular by their consumption and their supply of electricity. A large part of the new generations of connected objects will only be able to develop if it is possible to make them entirely autonomous in terms of energy. Some manufacturers are therefore developing products capable of recovering energy from their environment. Vital solutions in certain contexts, such as the medical industry. Energy recovery from the environment is a reliable solution to solve the problem of powering wireless connected objects. This paper presents and study a optically transparent solar patch antenna in frequency band of 2.4 GHz for connected objects in the future standard 5G for energy harvesting and RF transmission. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=antenna" title="antenna">antenna</a>, <a href="https://publications.waset.org/abstracts/search?q=IoT" title=" IoT"> IoT</a>, <a href="https://publications.waset.org/abstracts/search?q=solar%20cell" title=" solar cell"> solar cell</a>, <a href="https://publications.waset.org/abstracts/search?q=wireless%20communications" title=" wireless communications"> wireless communications</a> </p> <a href="https://publications.waset.org/abstracts/129453/antenna-for-energy-harvesting-in-wireless-connected-objects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129453.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">167</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1843</span> Development of 3D Laser Scanner for Robot Navigation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Emre%20%C3%96zt%C3%BCrk">Ali Emre Öztürk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ergun%20Ercelebi"> Ergun Ercelebi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autonomous robotic systems needs an equipment like a human eye for their movement. Robotic camera systems, distance sensors and 3D laser scanners have been used in the literature. In this study a 3D laser scanner has been produced for those autonomous robotic systems. In general 3D laser scanners are using 2 dimension laser range finders that are moving on one-axis (1D) to generate the model. In this study, the model has been obtained by a one-dimensional laser range finder that is moving in two –axis (2D) and because of this the laser scanner has been produced cheaper. Furthermore for the laser scanner a motor driver, an embedded system control board has been used and at the same time a user interface card has been used to make the communication between those cards and computer. Due to this laser scanner, the density of the objects, the distance between the objects and the necessary path ways for the robot can be calculated. The data collected by the laser scanner system is converted in to cartesian coordinates to be modeled in AutoCAD program. This study shows also the synchronization between the computer user interface, AutoCAD and the embedded systems. As a result it makes the solution cheaper for such systems. The scanning results are enough for an autonomous robot but the scan cycle time should be developed. This study makes also contribution for further studies between the hardware and software needs since it has a powerful performance and a low cost. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20laser%20scanner" title="3D laser scanner">3D laser scanner</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20system" title=" embedded system"> embedded system</a>, <a href="https://publications.waset.org/abstracts/search?q=1D%20laser%20range%20finder" title=" 1D laser range finder"> 1D laser range finder</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20model" title=" 3D model"> 3D model</a> </p> <a href="https://publications.waset.org/abstracts/3355/development-of-3d-laser-scanner-for-robot-navigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3355.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">274</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1842</span> The Contribution of Lower Visual Channels and Evolutionary Origin of the Tunnel Effect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shai%20Gabay">Shai Gabay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The tunnel effect describes the phenomenon where a moving object seems to persist even when temporarily hidden from view. Numerous studies indicate that humans, infants, and nonhuman primates possess object persistence, relying on spatiotemporal cues to track objects that are dynamically occluded. While this ability is associated with neural activity in the cerebral neocortex of humans and mammals, the role of subcortical mechanisms remains ambiguous. In our current investigation, we explore the functional contribution of monocular aspects of the visual system, predominantly subcortical, to the representation of occluded objects. This is achieved by manipulating whether the reappearance of an object occurs in the same or different eye from its disappearance. Additionally, we employ Archerfish, renowned for their precision in dislodging insect prey with water jets, as a phylogenetic model to probe the evolutionary origins of the tunnel effect. Our findings reveal the active involvement of subcortical structures in the mental representation of occluded objects, a process evident even in species that do not possess cortical tissue. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=archerfish" title="archerfish">archerfish</a>, <a href="https://publications.waset.org/abstracts/search?q=tunnel%20effect" title=" tunnel effect"> tunnel effect</a>, <a href="https://publications.waset.org/abstracts/search?q=mental%20representations" title=" mental representations"> mental representations</a>, <a href="https://publications.waset.org/abstracts/search?q=monocular%20channels" title=" monocular channels"> monocular channels</a>, <a href="https://publications.waset.org/abstracts/search?q=subcortical%20structures" title=" subcortical structures"> subcortical structures</a> </p> <a href="https://publications.waset.org/abstracts/185847/the-contribution-of-lower-visual-channels-and-evolutionary-origin-of-the-tunnel-effect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185847.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">45</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1841</span> Implementation of a Serializer to Represent PHP Objects in the Extensible Markup Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lidia%20N.%20Hern%C3%A1ndez-Pi%C3%B1a">Lidia N. Hernández-Piña</a>, <a href="https://publications.waset.org/abstracts/search?q=Carlos%20R.%20Jaimez-Gonz%C3%A1lez"> Carlos R. Jaimez-González</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Interoperability in distributed systems is an important feature that refers to the communication of two applications written in different programming languages. This paper presents a serializer and a de-serializer of PHP objects to and from XML, which is an independent library written in the PHP programming language. The XML generated by this serializer is independent of the programming language, and can be used by other existing Web Objects in XML (WOX) serializers and de-serializers, which allow interoperability with other object-oriented programming languages. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=interoperability" title="interoperability">interoperability</a>, <a href="https://publications.waset.org/abstracts/search?q=PHP%20object%20serialization" title=" PHP object serialization"> PHP object serialization</a>, <a href="https://publications.waset.org/abstracts/search?q=PHP%20to%20XML" title=" PHP to XML"> PHP to XML</a>, <a href="https://publications.waset.org/abstracts/search?q=web%20objects%20in%20XML" title=" web objects in XML"> web objects in XML</a>, <a href="https://publications.waset.org/abstracts/search?q=WOX" title=" WOX"> WOX</a> </p> <a href="https://publications.waset.org/abstracts/79264/implementation-of-a-serializer-to-represent-php-objects-in-the-extensible-markup-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79264.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">236</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1840</span> Virtual Reality Application for Neurorehabilitation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Vargas-Herrera">Daniel Vargas-Herrera</a>, <a href="https://publications.waset.org/abstracts/search?q=Ivette%20Caldelas"> Ivette Caldelas</a>, <a href="https://publications.waset.org/abstracts/search?q=Fernando%20Brambila-Paz"> Fernando Brambila-Paz</a>, <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20Montufar-Chaveznava"> Rodrigo Montufar-Chaveznava</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a virtual reality application for neurorehabilitation. This application was developed using the Unity SDK integrating the Oculus Rift and Leap Motion devices. Essentially, it consists of three stages according to the kind of rehabilitation to carry on: ocular rehabilitation, head/neck rehabilitation, and eye-hand coordination. We build three scenes for each task; for ocular and head/neck rehabilitation, there are different objects moving in the field of view and extended field of view of the user according to some patterns relative to the therapy. In the third stage the user must try to touch with the hand some objects guided by its view. We report the primer results of the use of the application with healthy people. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title="virtual reality">virtual reality</a>, <a href="https://publications.waset.org/abstracts/search?q=interactive%20technologies" title=" interactive technologies"> interactive technologies</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20games" title=" video games"> video games</a>, <a href="https://publications.waset.org/abstracts/search?q=neurorehabilitation" title=" neurorehabilitation"> neurorehabilitation</a> </p> <a href="https://publications.waset.org/abstracts/55918/virtual-reality-application-for-neurorehabilitation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/55918.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1839</span> Design and Manufacture of Non-Contact Moving Load for Experimental Analysis of Beams</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Firooz%20Bakhtiari-Nejad">Firooz Bakhtiari-Nejad</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamidreza%20Rostami"> Hamidreza Rostami</a>, <a href="https://publications.waset.org/abstracts/search?q=Meysam%20Mirzaee"> Meysam Mirzaee</a>, <a href="https://publications.waset.org/abstracts/search?q=Mona%20Zandbaf"> Mona Zandbaf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dynamic tests are an important step of the design of engineering structures, because the accuracy of predictions of theoretical–numerical procedures can be assessed. In experimental test of moving loads that is one of the major research topics, the load is modeled as a simple moving mass or a small vehicle. This paper deals with the applicability of Non-Contact Moving Load (NML) for vibration analysis. For this purpose, an experimental set-up is designed to generate the different types of NML including constant and harmonic. The proposed method relies on pressurized air which is useful, especially when dealing with fragile or sensitive structures. To demonstrate the performance of this system, the set-up is employed for a modal analysis of a beam and detecting crack of the beam. The obtained results indicate that the experimental set-up for NML can be an attractive alternative to the moving load problems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=experimental%20analysis" title="experimental analysis">experimental analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20load" title=" moving load"> moving load</a>, <a href="https://publications.waset.org/abstracts/search?q=non-contact%20excitation" title=" non-contact excitation"> non-contact excitation</a>, <a href="https://publications.waset.org/abstracts/search?q=materials%20engineering" title=" materials engineering"> materials engineering</a> </p> <a href="https://publications.waset.org/abstracts/2510/design-and-manufacture-of-non-contact-moving-load-for-experimental-analysis-of-beams" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2510.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">465</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1838</span> Detection of Image Blur and Its Restoration for Image Enhancement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image restoration in the process of communication is one of the emerging fields in the image processing. The motion analysis processing is the simplest case to detect motion in an image. Applications of motion analysis widely spread in many areas such as surveillance, remote sensing, film industry, navigation of autonomous vehicles, etc. The scene may contain multiple moving objects, by using motion analysis techniques the blur caused by the movement of the objects can be enhanced by filling-in occluded regions and reconstruction of transparent objects, and it also removes the motion blurring. This paper presents the design and comparison of various motion detection and enhancement filters. Median filter, Linear image deconvolution, Inverse filter, Pseudoinverse filter, Wiener filter, Lucy Richardson filter and Blind deconvolution filters are used to remove the blur. In this work, we have considered different types and different amount of blur for the analysis. Mean Square Error (MSE) and Peak Signal to Noise Ration (PSNR) are used to evaluate the performance of the filters. The designed system has been implemented in Matlab software and tested for synthetic and real-time images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title="image enhancement">image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20analysis" title=" motion analysis"> motion analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20detection" title=" motion detection"> motion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20estimation" title=" motion estimation"> motion estimation</a> </p> <a href="https://publications.waset.org/abstracts/59485/detection-of-image-blur-and-its-restoration-for-image-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">287</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=62">62</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=63">63</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=moving%20objects&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a 
href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10