Search results for: 3D object image

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="3D object image"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3759</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: 3D object image</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3759</span> Facility Detection from Image Using Mathematical Morphology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=In-Geun%20Lim">In-Geun Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung-Woong%20Ra"> Sung-Woong Ra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As high resolution satellite images can be used, lots of studies are carried out for exploiting these images in various fields. This paper proposes the method based on mathematical morphology for extracting the ‘horse's hoof shaped object’. This proposed method can make an automatic object detection system to track the meaningful object in a large satellite image rapidly. Mathematical morphology process can apply in binary image, so this method is very simple. Therefore this method can easily extract the ‘horse's hoof shaped object’ from any images which have indistinct edges of the tracking object and have different image qualities depending on filming location, filming time, and filming environment. Using the proposed method by which ‘horse's hoof shaped object’ can be rapidly extracted, the performance of the automatic object detection system can be improved dramatically. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facility%20detection" title="facility detection">facility detection</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20image" title=" satellite image"> satellite image</a>, <a href="https://publications.waset.org/abstracts/search?q=object" title=" object"> object</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20morphology" title=" mathematical morphology"> mathematical morphology</a> </p> <a href="https://publications.waset.org/abstracts/67611/facility-detection-from-image-using-mathematical-morphology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67611.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">381</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3758</span> 6D Posture Estimation of Road Vehicles from Color Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yoshimoto%20Kurihara">Yoshimoto Kurihara</a>, <a href="https://publications.waset.org/abstracts/search?q=Tad%20Gonsalves"> Tad Gonsalves</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Currently, in the field of object posture estimation, there is research on estimating the position and angle of an object by storing a 3D model of the object to be estimated in advance in a computer and matching it with the model. However, in this research, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks – a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy of each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°, the accuracy of the classification was about 87.3%, and that of regression was about 98.9%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=6D%20posture%20estimation" title="6D posture estimation">6D posture estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=AlexNet" title=" AlexNet"> AlexNet</a> </p> <a href="https://publications.waset.org/abstracts/138449/6d-posture-estimation-of-road-vehicles-from-color-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3757</span> Deep Learning Application for Object Image Recognition and Robot Automatic Grasping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiuh-Jer%20Huang">Shiuh-Jer Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen-Zon%20Yan"> Chen-Zon Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20K.%20Huang"> C. K. Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chun-Chien%20Ting"> Chun-Chien Ting</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the vision system application in industrial environment for autonomous purposes is required intensely, the image recognition technique becomes an important research topic. Here, deep learning algorithm is employed in image system to recognize the industrial object and integrate with a 7A6 Series Manipulator for object automatic gripping task. PC and Graphic Processing Unit (GPU) are chosen to construct the 3D Vision Recognition System. Depth Camera (Intel RealSense SR300) is employed to extract the image for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in Convolution neural network (CNN) structure for object classification and center point prediction. Additionally, image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to robotic controller. Finally, a six-axis manipulator can grasp the specific object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. It is useful for future intelligent robotic application in industrial 4.0 environment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv2" title=" YOLOv2"> YOLOv2</a>, <a href="https://publications.waset.org/abstracts/search?q=7A6%20series%20manipulator" title=" 7A6 series manipulator"> 7A6 series manipulator</a> </p> <a href="https://publications.waset.org/abstracts/110468/deep-learning-application-for-object-image-recognition-and-robot-automatic-grasping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110468.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3756</span> A Review on Artificial Neural Networks in Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Afsharipoor">B. Afsharipoor</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Nazemi"> E. Nazemi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Artificial neural networks (ANNs) are powerful tool for prediction which can be trained based on a set of examples and thus, it would be useful for nonlinear image processing. The present paper reviews several paper regarding applications of ANN in image processing to shed the light on advantage and disadvantage of ANNs in this field. Different steps in the image processing chain including pre-processing, enhancement, segmentation, object recognition, image understanding and optimization by using ANN are summarized. Furthermore, results on using multi artificial neural networks are presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title="neural networks">neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20understanding" title=" image understanding"> image understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization" title=" optimization"> optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=MANN" title=" MANN"> MANN</a> </p> <a href="https://publications.waset.org/abstracts/36843/a-review-on-artificial-neural-networks-in-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36843.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">406</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3755</span> Detect Circles in Image: Using Statistical Image Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fathi%20M.%20O.%20Hamed">Fathi M. O. Hamed</a>, <a href="https://publications.waset.org/abstracts/search?q=Salma%20F.%20Elkofhaifee"> Salma F. Elkofhaifee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this work is to detect geometrical shape objects in an image. In this paper, the object is considered to be as a circle shape. The identification requires find three characteristics, which are number, size, and location of the object. To achieve the goal of this work, this paper presents an algorithm that combines from some of statistical approaches and image analysis techniques. This algorithm has been implemented to arrive at the major objectives in this paper. The algorithm has been evaluated by using simulated data, and yields good results, and then it has been applied to real data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=median%20filter" title=" median filter"> median filter</a>, <a href="https://publications.waset.org/abstracts/search?q=projection" title=" projection"> projection</a>, <a href="https://publications.waset.org/abstracts/search?q=scale-space" title=" scale-space"> scale-space</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=threshold" title=" threshold"> threshold</a> </p> <a href="https://publications.waset.org/abstracts/37141/detect-circles-in-image-using-statistical-image-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37141.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">432</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3754</span> Development of Intelligent Construction Management System Using Web-Camera Image and 3D Object Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyeon-Seung%20Kim">Hyeon-Seung Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Bit-Na%20Cho"> Bit-Na Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Tae-Woon%20Jeong"> Tae-Woon Jeong</a>, <a href="https://publications.waset.org/abstracts/search?q=Soo-Young%20Yoon"> Soo-Young Yoon</a>, <a href="https://publications.waset.org/abstracts/search?q=Leen-Seok%20Kang"> Leen-Seok Kang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, a construction project has been large in the size and complicated in the site work. The web-cameras are used to manage the construction site of such a large construction project. They can be used for monitoring the construction schedule as compared to the actual work image of the planned work schedule. Specially, because the 4D CAD system that the construction appearance is continually simulated in a 3D CAD object by work schedule is widely applied to the construction project, the comparison system between the real image of actual work appearance by web-camera and the simulated image of planned work appearance by 3D CAD object can be an intelligent construction schedule management system (ICON). The delayed activities comparing with the planned schedule can be simulated by red color in the ICON as a virtual reality object. This study developed the ICON and it was verified in a real bridge construction project in Korea. To verify the developed system, a web-camera was installed and operated in a case project for a month. Because the angle and zooming of the web-camera can be operated by Internet, a project manager can easily monitor and assume the corrective action. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=4D%20CAD" title="4D CAD">4D CAD</a>, <a href="https://publications.waset.org/abstracts/search?q=web-camera" title=" web-camera"> web-camera</a>, <a href="https://publications.waset.org/abstracts/search?q=ICON%20%28intelligent%20construction%20schedule%20management%20system%29" title=" ICON (intelligent construction schedule management system)"> ICON (intelligent construction schedule management system)</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20object%20image" title=" 3D object image"> 3D object image</a> </p> <a href="https://publications.waset.org/abstracts/18621/development-of-intelligent-construction-management-system-using-web-camera-image-and-3d-object-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18621.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">507</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3753</span> Mathematical Reconstruction of an Object Image Using X-Ray Interferometric Fourier Holography Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20K.%20Balyan">M. K. Balyan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main principles of X-ray Fourier interferometric holography method are discussed. The object image is reconstructed by the mathematical method of Fourier transformation. The three methods are presented &ndash; method of approximation, iteration method and step by step method. As an example the complex amplitude transmission coefficient reconstruction of a beryllium wire is considered. The results reconstructed by three presented methods are compared. The best results are obtained by means of step by step method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamical%20diffraction" title="dynamical diffraction">dynamical diffraction</a>, <a href="https://publications.waset.org/abstracts/search?q=hologram" title=" hologram"> hologram</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20image" title=" object image"> object image</a>, <a href="https://publications.waset.org/abstracts/search?q=X-ray%20holography" title=" X-ray holography"> X-ray holography</a> </p> <a href="https://publications.waset.org/abstracts/56283/mathematical-reconstruction-of-an-object-image-using-x-ray-interferometric-fourier-holography-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56283.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">394</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3752</span> Vehicular Speed Detection Camera System Using Video Stream</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20A.%20Anser%20Pasha">C. A. 

3752. Vehicular Speed Detection Camera System Using Video Stream
Authors: C. A. Anser Pasha
Abstract: In this paper, a new vehicular speed detection camera system (SDCS), applicable as an alternative to traditional radar with the same or even better accuracy, is presented. Real-time measurement and analysis of traffic parameters such as speed and vehicle counts are increasingly required in traffic control and management, and image processing techniques are now considered an attractive, flexible method for automatic analysis and data collection in traffic engineering. Various algorithms based on image processing have been applied to detect multiple vehicles and track them. SDCS processing divides into three successive phases. The first phase is object detection, which uses a hybrid algorithm combining adaptive background subtraction with three-frame differencing, rectifying the major drawback of using adaptive background subtraction alone. The second phase is object tracking, which consists of three successive operations: object segmentation, object labeling, and object center extraction. Tracking takes into account the possible scenarios of a moving object: simple tracking, the object leaving the scene, the object entering the scene, the object being crossed by another object, and one object leaving while another enters. The third phase is speed calculation, where speed is derived from the number of frames the object takes to pass through the scene.
Keywords: radar, image processing, detection, tracking, segmentation
Procedia: https://publications.waset.org/abstracts/45316/vehicular-speed-detection-camera-system-using-video-stream | PDF: https://publications.waset.org/abstracts/45316.pdf | Downloads: 467
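
The phase-three arithmetic is straightforward once the camera is calibrated; assuming a known frame rate and real-world scene length (neither is given in the abstract):

```python
def vehicle_speed_kmh(frames_in_scene: int, fps: float, scene_length_m: float) -> float:
    """Speed from the number of frames a vehicle needs to cross the scene.

    fps and scene_length_m are calibration values assumed known for the
    camera; the abstract specifies only the frame-count principle.
    """
    seconds = frames_in_scene / fps
    return (scene_length_m / seconds) * 3.6   # m/s -> km/h

# e.g. 45 frames at 25 fps across a 30 m stretch:
print(vehicle_speed_kmh(45, 25.0, 30.0))      # 60.0 km/h
```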

3751. Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering
Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause
Abstract: In recent years, object detection has gained much attention as an encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. To the best of our knowledge, however, object detection under illumination with shadows has not been well solved yet, and this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions; a key challenge is how to detect the object under uneven illumination. The algorithm needs to consider a variety of possible environmental factors, as colour information, lighting, and shadows vary from image to image. Existing methods mostly fail to produce appropriate results due to variation in colour information and lighting effects and their reliance on threshold specifications, histogram dependencies, and fixed colour ranges. To overcome these limitations, we propose an object detection algorithm with pre-processing methods that reduce the interference caused by shadow and illumination effects without fixed parameters: we use the YCrCb colour model without specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set, acquired under various environmental conditions, for wood stack detection; experiments show promising results in comparison with existing methods.
Keywords: image processing, illumination equalization, shadow filtering, object detection
Procedia: https://publications.waset.org/abstracts/77157/object-detection-in-digital-images-under-non-standardized-conditions-using-illumination-and-shadow-filtering | PDF: https://publications.waset.org/abstracts/77157.pdf | Downloads: 216
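
A hedged OpenCV sketch of the described pre-processing chain, YCrCb conversion with no hand-set colour ranges, then erosion/dilation and contours; Otsu's adaptive threshold stands in here for the unspecified parameter-free thresholding step:

```python
import cv2
import numpy as np

def detect_objects(bgr: np.ndarray):
    """Segment objects under uneven illumination: YCrCb conversion, adaptive
    thresholding, erosion/dilation clean-up, then contours of what remains."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y = cv2.equalizeHist(ycrcb[:, :, 0])          # even out illumination on luma
    _, mask = cv2.threshold(y, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.erode(mask, kernel)                # drop shadow fringes and noise
    mask = cv2.dilate(mask, kernel)               # restore the object body
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours

# usage: regions = detect_objects(cv2.imread("wood_stack.jpg"))  # hypothetical file
```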

3750. Urban Land Cover from GF-2 Satellite Images Using Object Based and Neural Network Classifications
Authors: Lamyaa Gamal El-Deen Taha, Ashraf Sharawi
Abstract: China launched the GF-2 satellite in 2014. This study compares nearest-neighbour object-based classification and neural network classification for classifying the fused GF-2 image. First, rectification of the GF-2 image was performed. Second, the two classification methods were applied to the fused image. Third, the overall accuracy of classification and the kappa index were calculated. Results indicate that nearest-neighbour object-based classification is better than neural network classification for urban mapping.
Keywords: GF-2 images, feature extraction-rectification, nearest neighbour object based classification, segmentation algorithms, neural network classification, multilayer perceptron
Procedia: https://publications.waset.org/abstracts/84243/urban-land-cover-from-gf-2-satellite-images-using-object-based-and-neural-network-classifications | PDF: https://publications.waset.org/abstracts/84243.pdf | Downloads: 389

3749. Rehabilitation of the Blind Using Sono-Visualization Tool
Authors: Ashwani Kumar
Abstract: In human beings, the eyes play a vital role, yet very little research has been done on rehabilitation for blind people. This paper discusses work that helps blind people recognize the basic shapes of objects, such as circles, squares, triangles, and horizontal, vertical, and diagonal lines, as well as waveforms such as sinusoidal, square, and triangular. This is largely achieved using a digital camera, which captures the visual information in front of the blind person, and a software program, which performs the image processing operations and converts the processed image into sound. The generated sound is fed to the blind person through headphones for visualizing an imaginary image of the object. The blind person needs training to visualize objects this way, and various training methods were applied for recognizing the object.
Keywords: image processing, pixel, pitch, loudness, sound generation, edge detection, brightness
Procedia: https://publications.waset.org/abstracts/14606/rehabilitation-of-the-blind-using-sono-visualization-tool | PDF: https://publications.waset.org/abstracts/14606.pdf | Downloads: 388
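
A toy version of the image-to-sound mapping, with edge-pixel height driving pitch and brightness driving loudness (all mapping constants below are invented for illustration):

```python
import numpy as np

SAMPLE_RATE = 44100

def column_to_tone(column: np.ndarray, duration: float = 0.05) -> np.ndarray:
    """Sonify one image column: higher edge pixels -> higher pitch,
    brighter pixels -> louder. The constants are arbitrary choices."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    rows = np.nonzero(column > 128)[0]                   # "edge" pixels here
    tone = np.zeros_like(t)
    for r in rows:
        pitch = 200.0 + 1800.0 * (1 - r / len(column))   # top of image = high pitch
        loudness = column[r] / 255.0
        tone += loudness * np.sin(2 * np.pi * pitch * t)
    return tone

# Scan a grayscale edge image left to right, one short tone per column.
edges = (np.random.rand(64, 64) > 0.98) * 255
audio = np.concatenate([column_to_tone(edges[:, c]) for c in range(edges.shape[1])])
```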
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel" title=" pixel"> pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=pitch" title=" pitch"> pitch</a>, <a href="https://publications.waset.org/abstracts/search?q=loudness" title=" loudness"> loudness</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20generation" title=" sound generation"> sound generation</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=brightness" title=" brightness"> brightness</a> </p> <a href="https://publications.waset.org/abstracts/14606/rehabilitation-of-the-blind-using-sono-visualization-tool" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14606.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">388</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3748</span> Improvement of Brain Tumors Detection Using Markers and Boundaries Transform </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousif%20Mohamed%20Y.%20Abdallah">Yousif Mohamed Y. Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Mommen%20A.%20Alkhir"> Mommen A. Alkhir</a>, <a href="https://publications.waset.org/abstracts/search?q=Amel%20S.%20Algaddal"> Amel S. Algaddal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This was experimental study conducted to study segmentation of brain in MRI images using edge detection and morphology filters. For brain MRI images each film scanned using digitizer scanner then treated by using image processing program (MatLab), where the segmentation was studied. The scanned image was saved in a TIFF file format to preserve the quality of the image. Brain tissue can be easily detected in MRI image if the object has sufficient contrast from the background. We use edge detection and basic morphology tools to detect a brain. The segmentation of MRI images steps using detection and morphology filters were image reading, detection entire brain, dilation of the image, filling interior gaps inside the image, removal connected objects on borders and smoothen the object (brain). The results of this study were that it showed an alternate method for displaying the segmented object would be to place an outline around the segmented brain. Those filters approaches can help in removal of unwanted background information and increase diagnostic information of Brain MRI. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=improvement" title="improvement">improvement</a>, <a href="https://publications.waset.org/abstracts/search?q=brain" title=" brain"> brain</a>, <a href="https://publications.waset.org/abstracts/search?q=matlab" title=" matlab"> matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=markers" title=" markers"> markers</a>, <a href="https://publications.waset.org/abstracts/search?q=boundaries" title=" boundaries"> boundaries</a> </p> <a href="https://publications.waset.org/abstracts/31036/improvement-of-brain-tumors-detection-using-markers-and-boundaries-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31036.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">516</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3747</span> Image Classification with Localization Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhuyain%20Mobarok%20Hossain">Bhuyain Mobarok Hossain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image classification and localization research is currently an important strategy in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNN) have greatly improved the capabilities of object detection and image-based classification. Target detection is important to research in the field of computer vision, especially in video surveillance systems. To solve this problem, we will be applying a convolutional neural network of multiple scales at multiple locations in the image in one sliding window. Most translation networks move away from the bounding box around the area of interest. In contrast to this architecture, we consider the problem to be a classification problem where each pixel of the image is a separate section. Image classification is the method of predicting an individual category or specifying by a shoal of data points. Image classification is a part of the classification problem, including any labels throughout the image. The image can be classified as a day or night shot. Or, likewise, images of cars and motorbikes will be automatically placed in their collection. The deep learning of image classification generally includes convolutional layers; the invention of it is referred to as a convolutional neural network (CNN). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title="image classification">image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a> </p> <a href="https://publications.waset.org/abstracts/139288/image-classification-with-localization-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3746</span> Robust and Real-Time Traffic Counting System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hossam%20M.%20Moftah">Hossam M. Moftah</a>, <a href="https://publications.waset.org/abstracts/search?q=Aboul%20Ella%20Hassanien"> Aboul Ella Hassanien</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the recent years the importance of automatic traffic control has increased due to the traffic jams problem especially in big cities for signal control and efficient traffic management. Traffic counting as a kind of traffic control is important to know the road traffic density in real time. This paper presents a fast and robust traffic counting system using different image processing techniques. The proposed system is composed of the following four fundamental building phases: image acquisition, pre-processing, object detection, and finally counting the connected objects. The object detection phase is comprised of the following five steps: subtracting the background, converting the image to binary, closing gaps and connecting nearby blobs, image smoothing to remove noises and very small objects, and detecting the connected objects. Experimental results show the great success of the proposed approach. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20counting" title="traffic counting">traffic counting</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20management" title=" traffic management"> traffic management</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/43835/robust-and-real-time-traffic-counting-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43835.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">294</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3745</span> User Authentication Using Graphical Password with Sound Signature</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Devi%20Srinivas">Devi Srinivas</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Sindhuja"> K. Sindhuja</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents architecture to improve surveillance applications based on the usage of the service oriented paradigm, with smart phones as user terminals, allowing application dynamic composition and increasing the flexibility of the system. According to the result of moving object detection research on video sequences, the movement of the people is tracked using video surveillance. The moving object is identified using the image subtraction method. The background image is subtracted from the foreground image, from that the moving object is derived. So the Background subtraction algorithm and the threshold value is calculated to find the moving image by using background subtraction algorithm the moving frame is identified. Then, by the threshold value the movement of the frame is identified and tracked. Hence, the movement of the object is identified accurately. This paper deals with low-cost intelligent mobile phone-based wireless video surveillance solution using moving object recognition technology. The proposed solution can be useful in various security systems and environmental surveillance. The fundamental rule of moving object detecting is given in the paper, then, a self-adaptive background representation that can update automatically and timely to adapt to the slow and slight changes of normal surroundings is detailed. While the subtraction of the present captured image and the background reaches a certain threshold, a moving object is measured to be in the current view, and the mobile phone will automatically notify the central control unit or the user through SMS (Short Message System). The main advantage of this system is when an unknown image is captured by the system it will alert the user automatically by sending an SMS to user’s mobile. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=security" title="security">security</a>, <a href="https://publications.waset.org/abstracts/search?q=graphical%20password" title=" graphical password"> graphical password</a>, <a href="https://publications.waset.org/abstracts/search?q=persuasive%20cued%20click%20points" title=" persuasive cued click points"> persuasive cued click points</a> </p> <a href="https://publications.waset.org/abstracts/23794/user-authentication-using-graphical-password-with-sound-signature" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23794.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">537</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3744</span> Retrieving Similar Segmented Objects Using Motion Descriptors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Konstantinos%20C.%20Kartsakalis">Konstantinos C. Kartsakalis</a>, <a href="https://publications.waset.org/abstracts/search?q=Angeliki%20Skoura"> Angeliki Skoura</a>, <a href="https://publications.waset.org/abstracts/search?q=Vasileios%20Megalooikonomou"> Vasileios Megalooikonomou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The fuzzy composition of objects depicted in images acquired through MR imaging or the use of bio-scanners has often been a point of controversy for field experts attempting to effectively delineate between the visualized objects. Modern approaches in medical image segmentation tend to consider fuzziness as a characteristic and inherent feature of the depicted object, instead of an undesirable trait. In this paper, a novel technique for efficient image retrieval in the context of images in which segmented objects are either crisp or fuzzily bounded is presented. Moreover, the proposed method is applied in the case of multiple, even conflicting, segmentations from field experts. Experimental results demonstrate the efficiency of the suggested method in retrieving similar objects from the aforementioned categories while taking into account the fuzzy nature of the depicted data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20object" title="fuzzy object">fuzzy object</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20image%20segmentation" title=" fuzzy image segmentation"> fuzzy image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20descriptors" title=" motion descriptors"> motion descriptors</a>, <a href="https://publications.waset.org/abstracts/search?q=MRI%20imaging" title=" MRI imaging"> MRI imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval" title=" object-based image retrieval"> object-based image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/22736/retrieving-similar-segmented-objects-using-motion-descriptors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22736.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3743</span> Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ankit%20Sinha">Ankit Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Soham%20Banerjee"> Soham Banerjee</a>, <a href="https://publications.waset.org/abstracts/search?q=Pratik%20Chattopadhyay"> Pratik Chattopadhyay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of the products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon the existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising of a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online-hard-negative-mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retail%20stores" title="retail stores">retail stores</a>, <a href="https://publications.waset.org/abstracts/search?q=faster-RCNN" title=" faster-RCNN"> faster-RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20localization" title=" object localization"> object localization</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-18" title=" ResNet-18"> ResNet-18</a>, <a href="https://publications.waset.org/abstracts/search?q=triplet%20loss" title=" triplet loss"> triplet loss</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=product%20recognition" title=" product recognition"> product recognition</a> </p> <a href="https://publications.waset.org/abstracts/153836/effective-stacking-of-deep-neural-models-for-automated-object-recognition-in-retail-stores" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153836.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3742</span> Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hui%20Wei">Hui Wei</a>, <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Dong"> Zheng Dong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object recognition has long been a challenging task in computer vision. Yet the human brain, with the ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism in the visual pathway. We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image data sets. The result provides strong support for the potential of this model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biological%20model" title="biological model">biological model</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-layer%20neural%20network" title=" multi-layer neural network"> multi-layer neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a> </p> <a href="https://publications.waset.org/abstracts/25221/local-image-features-emerging-from-brain-inspired-multi-layer-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25221.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">542</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3741</span> Object Tracking in Motion Blurred Images with Adaptive Mean Shift and Wavelet Feature</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Iman%20Iraei">Iman Iraei</a>, <a href="https://publications.waset.org/abstracts/search?q=Mina%20Sharifi"> Mina Sharifi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A method for object tracking in motion blurred images is proposed in this article. This paper shows that object tracking could be improved with this approach. We use mean shift algorithm to track different objects as a main tracker. But, the problem is that mean shift could not track the selected object accurately in blurred scenes. So, for better tracking result, and increasing the accuracy of tracking, wavelet transform is used. We use a feature named as blur extent, which could help us to get better results in tracking. For calculating of this feature, we should use Harr wavelet. We can look at this matter from two different angles which lead to determine whether an image is blurred or not and to what extent an image is blur. In fact, this feature left an impact on the covariance matrix of mean shift algorithm and cause to better performance of tracking. This method has been concentrated mostly on motion blur parameter. transform. The results reveal the ability of our method in order to reach more accurately tracking. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mean%20shift" title="mean shift">mean shift</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=blur%20extent" title=" blur extent"> blur extent</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20transform" title=" wavelet transform"> wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20blur" title=" motion blur"> motion blur</a> </p> <a href="https://publications.waset.org/abstracts/81408/object-tracking-in-motion-blurred-images-with-adaptive-mean-shift-and-wavelet-feature" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81408.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">210</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3740</span> Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fidel%20L%C3%B2pez%20Saca">Fidel Lòpez Saca</a>, <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Avil%C3%A9s-Cruz"> Carlos Avilés-Cruz</a>, <a href="https://publications.waset.org/abstracts/search?q=Miguel%20Magos-Rivera"> Miguel Magos-Rivera</a>, <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20Antonio%20Lara-Ch%C3%A1vez"> José Antonio Lara-Chávez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system in an assembly line. The proposed shapes-color recognition system is based on deep learning theory in a specially designed convolutional network architecture. The used methodology involve stages such as: image capturing, color filtering, location of object mass centers, horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real-time. The implementation was done on a Raspberry Pi 3 system and on a Jetson-Nano device. The proposal is used in an assembly course of bachelor&rsquo;s degree in industrial engineering. The results presented include studying the efficiency of the recognition and processing time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title="deep-learning">deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20identification" title=" image identification"> image identification</a>, <a href="https://publications.waset.org/abstracts/search?q=industrial%20engineering." 
title=" industrial engineering."> industrial engineering.</a> </p> <a href="https://publications.waset.org/abstracts/126071/automatic-product-identification-based-on-deep-learning-theory-in-an-assembly-line" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126071.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3739</span> An Accurate Computation of 2D Zernike Moments via Fast Fourier Transform </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20S.%20Al-Rawi">Mohammed S. Al-Rawi</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Bastos"> J. Bastos</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Rodriguez"> J. Rodriguez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection and object recognition are essential components of every computer vision system. Despite the high computational complexity and other problems related to numerical stability and accuracy, Zernike moments of 2D images (ZMs) have shown resilience when used in object recognition and have been used in various image analysis applications. In this work, we propose a novel method for computing ZMs via Fast Fourier Transform (FFT). Notably, this is the first algorithm that can generate ZMs up to extremely high orders accurately, e.g., it can be used to generate ZMs for orders up to 1000 or even higher. Furthermore, the proposed method is also simpler and faster than the other methods due to the availability of FFT software and/or hardware. The accuracies and numerical stability of ZMs computed via FFT have been confirmed using the orthogonality property. We also introduce normalizing ZMs with Neumann factor when the image is embedded in a larger grid, and color image reconstruction based on RGB normalization of the reconstructed images. Astonishingly, higher-order image reconstruction experiments show that the proposed methods are superior, both quantitatively and subjectively, compared to the q-recursive method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chebyshev%20polynomial" title="Chebyshev polynomial">Chebyshev polynomial</a>, <a href="https://publications.waset.org/abstracts/search?q=fourier%20transform" title=" fourier transform"> fourier transform</a>, <a href="https://publications.waset.org/abstracts/search?q=fast%20algorithms" title=" fast algorithms"> fast algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=pseudo%20Zernike%20moments" title=" pseudo Zernike moments"> pseudo Zernike moments</a>, <a href="https://publications.waset.org/abstracts/search?q=Zernike%20moments" title=" Zernike moments"> Zernike moments</a> </p> <a href="https://publications.waset.org/abstracts/58226/an-accurate-computation-of-2d-zernike-moments-via-fast-fourier-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58226.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">265</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3738</span> Using Electrical Impedance Tomography to Control a Robot</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shayan%20Rezvanigilkolaei">Shayan Rezvanigilkolaei</a>, <a href="https://publications.waset.org/abstracts/search?q=Shayesteh%20Vefaghnematollahi"> Shayesteh Vefaghnematollahi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Electrical impedance tomography is a non-invasive medical imaging technique suitable for medical applications. This paper describes an electrical impedance tomography device with the ability to navigate a robotic arm to manipulate a target object. The design of the device includes various hardware and software sections to perform medical imaging and control the robotic arm. In its hardware section an image is formed by 16 electrodes which are located around a container. This image is used to navigate a 3DOF robotic arm to reach the exact location of the target object. The data set to form the impedance imaging is obtained by having repeated current injections and voltage measurements between all electrode pairs. After performing the necessary calculations to obtain the impedance, information is transmitted to the computer. This data is fed and then executed in MATLAB which is interfaced with EIDORS (Electrical Impedance Tomography Reconstruction Software) to reconstruct the image based on the acquired data. In the next step, the coordinates of the center of the target object are calculated by image processing toolbox of MATLAB (IPT). Finally, these coordinates are used to calculate the angles of each joint of the robotic arm. The robotic arm moves to the desired tissue with the user command. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electrical%20impedance%20tomography" title="electrical impedance tomography">electrical impedance tomography</a>, <a href="https://publications.waset.org/abstracts/search?q=EIT" title=" EIT"> EIT</a>, <a href="https://publications.waset.org/abstracts/search?q=surgeon%20robot" title=" surgeon robot"> surgeon robot</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing%20of%20electrical%20impedance%20tomography" title=" image processing of electrical impedance tomography"> image processing of electrical impedance tomography</a> </p> <a href="https://publications.waset.org/abstracts/43250/using-electrical-impedance-tomography-to-control-a-robot" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43250.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">272</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3737</span> Canonical Objects and Other Objects in Arabic</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Safiah%20Ahmed%20Madkhali">Safiah Ahmed Madkhali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The grammatical relation object has not attracted the same attention in the literature as subject has. Where there is a clearly monotransitive verb such as kick, the criteria for identifying the grammatical relation may converge. However, the term object is also used to refer to phenomena that do not subsume all, or even most, of the recognized properties of the canonical object. Instances of such phenomena include non-canonical objects such as the ones in the so-called double-object construction i.e. the indirect object and the direct object as in (He bought his dog a new collar). In this paper, it is demonstrated how criteria of identifying the grammatical relation object that are found in the theoretical and typological literature can be applied to Arabic. Also, further language-specific criteria are here derived from the regularities of the canonical object in the language. The criteria established in this way are then applied to the non-canonical objects to demonstrate how far they conform to, or diverge from, the canonical object. Contrary to the claim that the direct object is more similar to the canonical object than is the indirect object, it was found that it is, in fact, the indirect object rather than the direct object that shares most of the aspects of the canonical object in monotransitive clauses. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=canonical%20objects" title="canonical objects">canonical objects</a>, <a href="https://publications.waset.org/abstracts/search?q=double-object%20constructions" title=" double-object constructions"> double-object constructions</a>, <a href="https://publications.waset.org/abstracts/search?q=cognate%20object%20constructions" title=" cognate object constructions"> cognate object constructions</a>, <a href="https://publications.waset.org/abstracts/search?q=non-canonical%20objects" title=" non-canonical objects"> non-canonical objects</a> </p> <a href="https://publications.waset.org/abstracts/141579/canonical-objects-and-other-objects-in-arabic" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141579.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3736</span> Development of Ultrasounf Probe Holder for Automatic Scanning Asymmetric Reflector</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nabilah%20Ibrahim">Nabilah Ibrahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Hafiz%20Mohd%20Zaini"> Hafiz Mohd Zaini</a>, <a href="https://publications.waset.org/abstracts/search?q=Wan%20Fatin%20Liyana%20Mutalib"> Wan Fatin Liyana Mutalib</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ultrasound equipment or machine is capable to scan in two dimensional (2D) areas. However there are some limitations occur during scanning an object. The problem will occur when scanning process that involving the asymmetric object. In this project, the ultrasound probe holder for asymmetric reflector scanning in 3D image is proposed to make easier for scanning the phantom or object that has asymmetric shape. Initially, the constructed asymmetric phantom that construct will be used in 2D scanning. Next, the asymmetric phantom will be interfaced by the movement of ultrasound probe holder using the Arduino software. 
After that, the performance of the ultrasound probe holder is evaluated using various asymmetric reflectors or phantoms to construct a 3D image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ultrasound%203D%20images" title="ultrasound 3D images">ultrasound 3D images</a>, <a href="https://publications.waset.org/abstracts/search?q=axial%20and%20lateral%20resolution" title=" axial and lateral resolution"> axial and lateral resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=asymmetric%20reflector" title=" asymmetric reflector"> asymmetric reflector</a>, <a href="https://publications.waset.org/abstracts/search?q=Arduino%20software" title=" Arduino software"> Arduino software</a> </p> <a href="https://publications.waset.org/abstracts/22856/development-of-ultrasounf-probe-holder-for-automatic-scanning-asymmetric-reflector" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22856.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">560</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3735</span> An Object-Based Image Resizing Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chin-Chen%20Chang">Chin-Chen Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=I-Ta%20Lee"> I-Ta Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Tsung-Ta%20Ke"> Tsung-Ta Ke</a>, <a href="https://publications.waset.org/abstracts/search?q=Wen-Kai%20Tai"> Wen-Kai Tai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Common methods for resizing an image include scaling and cropping; however, both approaches introduce quality problems in the reduced image. In this paper, we propose an image resizing algorithm that separates the main objects from the background. First, we extract two feature maps, namely an enhanced visual saliency map and an improved gradient map, from an input image. After that, we integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images. 
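<p class="card-text">A minimal sketch of fusing a saliency map and a gradient map into an importance map, in the spirit of the approach above; the paper's enhanced saliency and improved gradient operators are not reproduced, the fusion weights are assumptions, and OpenCV with the contrib saliency module plus NumPy are assumed:</p> <pre><code class="language-python">
import cv2
import numpy as np

def importance_map(bgr, w_saliency=0.6, w_gradient=0.4):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Gradient map: Sobel magnitude.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = cv2.magnitude(gx, gy)
    # Visual saliency: OpenCV's spectral-residual detector (opencv-contrib).
    sal_algo = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal = sal_algo.computeSaliency(bgr)
    sal = sal.astype(np.float32)
    norm = lambda m: (m - m.min()) / (m.max() - m.min() + 1e-8)
    return w_saliency * norm(sal) + w_gradient * norm(grad)
</code></pre> <p class="card-text">The resulting importance map could then drive a content-aware resizer such as seam carving (cf. the keywords below).</p>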
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=energy%20map" title="energy map">energy map</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20saliency" title=" visual saliency"> visual saliency</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20map" title=" gradient map"> gradient map</a>, <a href="https://publications.waset.org/abstracts/search?q=seam%20carving" title=" seam carving"> seam carving</a> </p> <a href="https://publications.waset.org/abstracts/8953/an-object-based-image-resizing-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8953.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">476</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3734</span> Definition, Structure, and Core Functions of the State Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rosa%20Nurtazina">Rosa Nurtazina</a>, <a href="https://publications.waset.org/abstracts/search?q=Yerkebulan%20Zhumashov"> Yerkebulan Zhumashov</a>, <a href="https://publications.waset.org/abstracts/search?q=Maral%20Tomanova"> Maral Tomanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Humanity is entering an era when 'virtual reality' as the image of the world created by the media with the help of the Internet does not match the reality in many respects, when new communication technologies create a fundamentally different and previously unknown 'global space'. According to these technologies, the state begins to change the basic technology of political communication of the state and society, the state and the state. Nowadays, image of the state becomes the most important tool and technology. Image is a purposefully created image granting political object (person, organization, country, etc.) certain social and political values and promoting more emotional perception. Political image of the state plays an important role in international relations. The success of the country's foreign policy, development of trade and economic relations with other countries depends on whether it is positive or negative. Foreign policy image has an impact on political processes taking place in the state: the negative image of the countries can be used by opposition forces as one of the arguments to criticize the government and its policies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20country" title="image of the country">image of the country</a>, <a href="https://publications.waset.org/abstracts/search?q=country%27s%20image%20classification" title=" country&#039;s image classification"> country&#039;s image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image" title=" function of the country image"> function of the country image</a>, <a href="https://publications.waset.org/abstracts/search?q=country%27s%20image%20components" title=" country&#039;s image components"> country&#039;s image components</a> </p> <a href="https://publications.waset.org/abstracts/5104/definition-structure-and-core-functions-of-the-state-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5104.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">434</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3733</span> Recognition of Objects in a Maritime Environment Using a Combination of Pre- and Post-Processing of the Polynomial Fit Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20R.%20Hordijk">R. R. Hordijk</a>, <a href="https://publications.waset.org/abstracts/search?q=O.%20J.%20G.%20Somsen"> O. J. G. Somsen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traditionally, radar systems are the eyes and ears of a ship. However, these systems have their drawbacks and nowadays they are extended with systems that work with video and photos. Processing of data from these videos and photos is however very labour-intensive and efforts are being made to automate this process. A major problem when trying to recognize objects in water is that the 'background' is not homogeneous so that traditional image recognition technics do not work well. Main question is, can a method be developed which automate this recognition process. There are a large number of parameters involved to facilitate the identification of objects on such images. One is varying the resolution. In this research, the resolution of some images has been reduced to the extreme value of 1% of the original to reduce clutter before the polynomial fit (pre-processing). It turned out that the searched object was clearly recognizable as its grey value was well above the average. Another approach is to take two images of the same scene shortly after each other and compare the result. Because the water (waves) fluctuates much faster than an object floating in the water one can expect that the object is the only stable item in the two images. Both these methods (pre-processing and comparing two images of the same scene) delivered useful results. Though it is too early to conclude that with these methods all image problems can be solved they are certainly worthwhile for further research. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=polynomial%20fit" title=" polynomial fit"> polynomial fit</a>, <a href="https://publications.waset.org/abstracts/search?q=water" title=" water"> water</a> </p> <a href="https://publications.waset.org/abstracts/34331/recognition-of-objects-in-a-maritime-environment-using-a-combination-of-pre-and-post-processing-of-the-polynomial-fit-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">534</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3732</span> A Comprehensive Study of Camouflaged Object Detection Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalak%20Bin%20Khair">Khalak Bin Khair</a>, <a href="https://publications.waset.org/abstracts/search?q=Saqib%20Jahir"> Saqib Jahir</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Ibrahim"> Mohammed Ibrahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Fahad%20Bin"> Fahad Bin</a>, <a href="https://publications.waset.org/abstracts/search?q=Debajyoti%20Karmaker"> Debajyoti Karmaker</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection is a computer technology that deals with searching through digital images and videos for occurrences of semantic elements of a particular class. It is associated with image processing and computer vision. On top of object detection, we detect camouflage objects within an image using Deep Learning techniques. Deep learning may be a subset of machine learning that's essentially a three-layer neural network Over 6500 images that possess camouflage properties are gathered from various internet sources and divided into 4 categories to compare the result. Those images are labeled and then trained and tested using vgg16 architecture on the jupyter notebook using the TensorFlow platform. The architecture is further customized using Transfer Learning. Methods for transferring information from one or more of these source tasks to increase learning in a related target task are created through transfer learning. The purpose of this transfer of learning methodologies is to aid in the evolution of machine learning to the point where it is as efficient as human learning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=TensorFlow" title=" TensorFlow"> TensorFlow</a>, <a href="https://publications.waset.org/abstracts/search?q=camouflage" title=" camouflage"> camouflage</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=architecture" title=" architecture"> architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=model" title=" model"> model</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG16" title=" VGG16"> VGG16</a> </p> <a href="https://publications.waset.org/abstracts/152633/a-comprehensive-study-of-camouflaged-object-detection-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152633.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3731</span> Rough Neural Networks in Adapting Cellular Automata Rule for Reducing Image Noise</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yasser%20F.%20Hassan">Yasser F. Hassan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The reduction or removal of noise in a color image is an essential part of image processing, whether the final information is used for human perception or for an automatic inspection and analysis. This paper describes the modeling system based on the rough neural network model to adaptive cellular automata for various image processing tasks and noise remover. In this paper, we consider the problem of object processing in colored image using rough neural networks to help deriving the rules which will be used in cellular automata for noise image. The proposed method is compared with some classical and recent methods. The results demonstrate that the new model is capable of being trained to perform many different tasks, and that the quality of these results is comparable or better than established specialized algorithms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=rough%20sets" title="rough sets">rough sets</a>, <a href="https://publications.waset.org/abstracts/search?q=rough%20neural%20networks" title=" rough neural networks"> rough neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=cellular%20automata" title=" cellular automata"> cellular automata</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/1516/rough-neural-networks-in-adapting-cellular-automata-rule-for-reducing-image-noise" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">439</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3730</span> Evaluation of Fusion Sonar and Stereo Camera System for 3D Reconstruction of Underwater Archaeological Object</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yadpiroon%20Onmek">Yadpiroon Onmek</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean%20Triboulet"> Jean Triboulet</a>, <a href="https://publications.waset.org/abstracts/search?q=Sebastien%20Druon"> Sebastien Druon</a>, <a href="https://publications.waset.org/abstracts/search?q=Bruno%20Jouvencel"> Bruno Jouvencel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this paper is to develop the 3D underwater reconstruction of archaeology object, which is based on the fusion between a sonar system and stereo camera system. The underwater images are obtained from a calibrated camera system. The multiples image pairs are input, and we first solve the problem of image processing by applying the well-known filter, therefore to improve the quality of underwater images. The features of interest between image pairs are selected by well-known methods: a FAST detector and FLANN descriptor. Subsequently, the RANSAC method is applied to reject outlier points. The putative inliers are matched by triangulation to produce the local sparse point clouds in 3D space, using a pinhole camera model and Euclidean distance estimation. The SFM technique is used to carry out the global sparse point clouds. Finally, the ICP method is used to fusion the sonar information with the stereo model. The final 3D models have a précised by measurement comparing with the real object. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20reconstruction" title="3D reconstruction">3D reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=archaeology" title=" archaeology"> archaeology</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo%20system" title=" stereo system"> stereo system</a>, <a href="https://publications.waset.org/abstracts/search?q=sonar%20system" title=" sonar system"> sonar system</a>, <a href="https://publications.waset.org/abstracts/search?q=underwater" title=" underwater"> underwater</a> </p> <a href="https://publications.waset.org/abstracts/73700/evaluation-of-fusion-sonar-and-stereo-camera-system-for-3d-reconstruction-of-underwater-archaeological-object" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73700.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">299</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=125">125</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=126">126</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=3D%20object%20image&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" 
rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
