
Search results for: object recognition

class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="object recognition"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2777</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: object recognition</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2777</span> Specified Human Motion Recognition and Unknown Hand-Held Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jinsiang%20Shaw">Jinsiang Shaw</a>, <a href="https://publications.waset.org/abstracts/search?q=Pik-Hoe%20Chen"> Pik-Hoe Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to integrate human recognition, motion recognition, and object tracking technologies without requiring a pre-training database model for motion recognition or the unknown object itself. Furthermore, it can simultaneously track multiple users and multiple objects. Unlike other existing human motion recognition methods, our approach employs a rule-based condition method to determine if a user hand is approaching or departing an object. It uses a background subtraction method to separate the human and object from the background, and employs behavior features to effectively interpret human object-grabbing actions. With an object’s histogram characteristics, we are able to isolate and track it using back projection. Hence, a moving object trajectory can be recorded and the object itself can be located. This particular technique can be used in a camera surveillance system in a shopping area to perform real-time intelligent surveillance, thus preventing theft. Experimental results verify the validity of the developed surveillance algorithm with an accuracy of 83% for shoplifting detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Automatic%20Tracking" title="Automatic Tracking">Automatic Tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Back%20Projection" title=" Back Projection"> Back Projection</a>, <a href="https://publications.waset.org/abstracts/search?q=Motion%20Recognition" title=" Motion Recognition"> Motion Recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Shoplifting" title=" Shoplifting"> Shoplifting</a> </p> <a href="https://publications.waset.org/abstracts/66866/specified-human-motion-recognition-and-unknown-hand-held-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2776</span> Genetic Algorithm Based Deep Learning Parameters Tuning for Robot Object Recognition and Grasping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Delowar%20Hossain">Delowar Hossain</a>, <a href="https://publications.waset.org/abstracts/search?q=Genci%20Capi"> Genci Capi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper concerns with the problem of deep learning parameters tuning using a genetic algorithm (GA) in order to improve the performance of deep learning (DL) method. We present a GA based DL method for robot object recognition and grasping. GA is used to optimize the DL parameters in learning procedure in term of the fitness function that is good enough. After finishing the evolution process, we receive the optimal number of DL parameters. To evaluate the performance of our method, we consider the object recognition and robot grasping tasks. Experimental results show that our method is efficient for robot object recognition and grasping. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=robot%20grasping" title=" robot grasping"> robot grasping</a> </p> <a href="https://publications.waset.org/abstracts/67943/genetic-algorithm-based-deep-learning-parameters-tuning-for-robot-object-recognition-and-grasping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">353</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2775</span> The Study on How Social Cues in a Scene Modulate Basic Object Recognition Proces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shih-Yu%20Lo">Shih-Yu Lo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Stereotypes exist in almost every society, affecting how people interact with each other. However, to our knowledge, the influence of stereotypes was rarely explored in the context of basic perceptual processes. This study aims to explore how the gender stereotype affects object recognition. Participants were presented with a series of scene pictures, followed by a target display with a man or a woman, holding a weapon or a non-weapon object. The task was to identify whether the object in the target display was a weapon or not. Although the gender of the object holder could not predict whether he or she held a weapon, and was irrelevant to the task goal, the participant nevertheless tended to identify the object as a weapon when the object holder was a man than a woman. The analysis based on the signal detection theory showed that the stereotype effect on object recognition mainly resulted from the participant’s bias to make a 'weapon' response when a man was in the scene instead of a woman in the scene. In addition, there was a trend that the participant’s sensitivity to differentiate a weapon from a non-threating object was higher when a woman was in the scene than a man was in the scene. The results of this study suggest that the irrelevant social cues implied in the visual scene can be very powerful that they can modulate the basic object recognition process. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gender%20stereotype" title="gender stereotype">gender stereotype</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20detection%20theory" title=" signal detection theory"> signal detection theory</a>, <a href="https://publications.waset.org/abstracts/search?q=weapon" title=" weapon"> weapon</a> </p> <a href="https://publications.waset.org/abstracts/92535/the-study-on-how-social-cues-in-a-scene-modulate-basic-object-recognition-proces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92535.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">209</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2774</span> Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maria%20Pavlova">Maria Pavlova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In our days, it is possible to put the camera on different vehicles like quadcopter, train, airplane and etc. The camera also can be the input sensor in many different systems. That means the object recognition like non separate part of monitoring control can be key part of the most intelligent systems. The aim of this paper is to focus of the object recognition process during vehicles movement. During the vehicle’s movement the camera takes pictures from the environment without storage in Data Base. In case the camera detects a special object (for example human or animal), the system saves the picture and sends it to the work station in real time. This functionality will be very useful in emergency or security situations where is necessary to find a specific object. In another application, the camera can be mounted on crossroad where do not have many people and if one or more persons come on the road, the traffic lights became the green and they can cross the road. In this papers is presented the system has solved the aforementioned problems. It is presented architecture of the object recognition system includes the camera, Raspberry platform, GPS system, neural network, software and Data Base. The camera in the system takes the pictures. The object recognition is done in real time using the OpenCV library and Raspberry microcontroller. An additional feature of this library is the ability to display the GPS coordinates of the captured objects position. The results from this processes will be sent to remote station. So, in this case, we can know the location of the specific object. By neural network, we can learn the module to solve the problems using incoming data and to be part in bigger intelligent system. The present paper focuses on the design and integration of the image recognition like a part of smart systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera" title="camera">camera</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a>, <a href="https://publications.waset.org/abstracts/search?q=Raspberry" title=" Raspberry"> Raspberry</a> </p> <a href="https://publications.waset.org/abstracts/81695/object-recognition-system-operating-from-different-type-vehicles-using-raspberry-and-opencv" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81695.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">218</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2773</span> Global Based Histogram for 3D Object Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Somar%20Boubou">Somar Boubou</a>, <a href="https://publications.waset.org/abstracts/search?q=Tatsuo%20Narikiyo"> Tatsuo Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Michihiro%20Kawanishi"> Michihiro Kawanishi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we address the problem of 3D object recognition with depth sensors such as Kinect or Structure sensor. Compared with traditional approaches based on local descriptors, which depends on local information around the object key points, we propose a global features based descriptor. Proposed descriptor, which we name as Differential Histogram of Normal Vectors (DHONV), is designed particularly to capture the surface geometric characteristics of the 3D objects represented by depth images. We describe the 3D surface of an object in each frame using a 2D spatial histogram capturing the normalized distribution of differential angles of the surface normal vectors. The object recognition experiments on the benchmark RGB-D object dataset and a self-collected dataset show that our proposed descriptor outperforms two others descriptors based on spin-images and histogram of normal vectors with linear-SVM classifier. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vision%20in%20control" title="vision in control">vision in control</a>, <a href="https://publications.waset.org/abstracts/search?q=robotics" title=" robotics"> robotics</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20histogram%20of%20normal%20vectors" title=" differential histogram of normal vectors"> differential histogram of normal vectors</a> </p> <a href="https://publications.waset.org/abstracts/47486/global-based-histogram-for-3d-object-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47486.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">279</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2772</span> Emotion Recognition in Video and Images in the Wild</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Faizan%20Tariq">Faizan Tariq</a>, <a href="https://publications.waset.org/abstracts/search?q=Moayid%20Ali%20Zaidi"> Moayid Ali Zaidi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial emotion recognition algorithms are expanding rapidly now a day. People are using different algorithms with different combinations to generate best results. There are six basic emotions which are being studied in this area. Author tried to recognize the facial expressions using object detector algorithms instead of traditional algorithms. Two object detection algorithms were chosen which are Faster R-CNN and YOLO. For pre-processing we used image rotation and batch normalization. The dataset I have chosen for the experiments is Static Facial Expression in Wild (SFEW). Our approach worked well but there is still a lot of room to improve it, which will be a future direction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title=" emotion recognition"> emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a> </p> <a href="https://publications.waset.org/abstracts/152635/emotion-recognition-in-video-and-images-in-the-wild" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152635.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2771</span> Deep Learning Application for Object Image Recognition and Robot Automatic Grasping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiuh-Jer%20Huang">Shiuh-Jer Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen-Zon%20Yan"> Chen-Zon Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20K.%20Huang"> C. K. Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chun-Chien%20Ting"> Chun-Chien Ting</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the vision system application in industrial environment for autonomous purposes is required intensely, the image recognition technique becomes an important research topic. Here, deep learning algorithm is employed in image system to recognize the industrial object and integrate with a 7A6 Series Manipulator for object automatic gripping task. PC and Graphic Processing Unit (GPU) are chosen to construct the 3D Vision Recognition System. Depth Camera (Intel RealSense SR300) is employed to extract the image for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in Convolution neural network (CNN) structure for object classification and center point prediction. Additionally, image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to robotic controller. Finally, a six-axis manipulator can grasp the specific object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. It is useful for future intelligent robotic application in industrial 4.0 environment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv2" title=" YOLOv2"> YOLOv2</a>, <a href="https://publications.waset.org/abstracts/search?q=7A6%20series%20manipulator" title=" 7A6 series manipulator"> 7A6 series manipulator</a> </p> <a href="https://publications.waset.org/abstracts/110468/deep-learning-application-for-object-image-recognition-and-robot-automatic-grasping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110468.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2770</span> Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ankit%20Sinha">Ankit Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Soham%20Banerjee"> Soham Banerjee</a>, <a href="https://publications.waset.org/abstracts/search?q=Pratik%20Chattopadhyay"> Pratik Chattopadhyay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of the products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon the existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising of a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online-hard-negative-mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retail%20stores" title="retail stores">retail stores</a>, <a href="https://publications.waset.org/abstracts/search?q=faster-RCNN" title=" faster-RCNN"> faster-RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20localization" title=" object localization"> object localization</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-18" title=" ResNet-18"> ResNet-18</a>, <a href="https://publications.waset.org/abstracts/search?q=triplet%20loss" title=" triplet loss"> triplet loss</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=product%20recognition" title=" product recognition"> product recognition</a> </p> <a href="https://publications.waset.org/abstracts/153836/effective-stacking-of-deep-neural-models-for-automated-object-recognition-in-retail-stores" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153836.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2769</span> Object Detection Based on Plane Segmentation and Features Matching for a Service Robot</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ant%C3%B3nio%20J.%20R.%20Neves">António J. R. Neves</a>, <a href="https://publications.waset.org/abstracts/search?q=Rui%20Garcia"> Rui Garcia</a>, <a href="https://publications.waset.org/abstracts/search?q=Paulo%20Dias"> Paulo Dias</a>, <a href="https://publications.waset.org/abstracts/search?q=Alina%20Trifan"> Alina Trifan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the aging of the world population and the continuous growth in technology, service robots are more and more explored nowadays as alternatives to healthcare givers or personal assistants for the elderly or disabled people. Any service robot should be capable of interacting with the human companion, receive commands, navigate through the environment, either known or unknown, and recognize objects. This paper proposes an approach for object recognition based on the use of depth information and color images for a service robot. We present a study on two of the most used methods for object detection, where 3D data is used to detect the position of objects to classify that are found on horizontal surfaces. Since most of the objects of interest accessible for service robots are on these surfaces, the proposed 3D segmentation reduces the processing time and simplifies the scene for object recognition. The first approach for object recognition is based on color histograms, while the second is based on the use of the SIFT and SURF feature descriptors. We present comparative experimental results obtained with a real service robot. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title="object detection">object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=feature" title=" feature"> feature</a>, <a href="https://publications.waset.org/abstracts/search?q=descriptors" title=" descriptors"> descriptors</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF" title=" SURF"> SURF</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20images" title=" depth images"> depth images</a>, <a href="https://publications.waset.org/abstracts/search?q=service%20robots" title=" service robots"> service robots</a> </p> <a href="https://publications.waset.org/abstracts/39840/object-detection-based-on-plane-segmentation-and-features-matching-for-a-service-robot" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39840.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">545</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2768</span> Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fidel%20L%C3%B2pez%20Saca">Fidel Lòpez Saca</a>, <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Avil%C3%A9s-Cruz"> Carlos Avilés-Cruz</a>, <a href="https://publications.waset.org/abstracts/search?q=Miguel%20Magos-Rivera"> Miguel Magos-Rivera</a>, <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20Antonio%20Lara-Ch%C3%A1vez"> José Antonio Lara-Chávez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system in an assembly line. The proposed shapes-color recognition system is based on deep learning theory in a specially designed convolutional network architecture. The used methodology involve stages such as: image capturing, color filtering, location of object mass centers, horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real-time. The implementation was done on a Raspberry Pi 3 system and on a Jetson-Nano device. The proposal is used in an assembly course of bachelor&rsquo;s degree in industrial engineering. The results presented include studying the efficiency of the recognition and processing time. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title="deep-learning">deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20identification" title=" image identification"> image identification</a>, <a href="https://publications.waset.org/abstracts/search?q=industrial%20engineering." title=" industrial engineering."> industrial engineering.</a> </p> <a href="https://publications.waset.org/abstracts/126071/automatic-product-identification-based-on-deep-learning-theory-in-an-assembly-line" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126071.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2767</span> Detecting Characters as Objects Towards Character Recognition on Licence Plates</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alden%20Boby">Alden Boby</a>, <a href="https://publications.waset.org/abstracts/search?q=Dane%20Brown"> Dane Brown</a>, <a href="https://publications.waset.org/abstracts/search?q=James%20Connan"> James Connan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Character recognition is a well-researched topic across disciplines. Regardless, creating a solution that can cater to multiple situations is still challenging. Vehicle licence plates lack an international standard, meaning that different countries and regions have their own licence plate format. A problem that arises from this is that the typefaces and designs from different regions make it difficult to create a solution that can cater to a wide range of licence plates. The main issue concerning detection is the character recognition stage. This paper aims to create an object detection-based character recognition model trained on a custom dataset that consists of typefaces of licence plates from various regions. Given that characters have featured consistently maintained across an array of fonts, YOLO can be trained to recognise characters based on these features, which may provide better performance than OCR methods such as Tesseract OCR. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=character%20recognition" title=" character recognition"> character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=licence%20plate%20recognition" title=" licence plate recognition"> licence plate recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a> </p> <a href="https://publications.waset.org/abstracts/155443/detecting-characters-as-objects-towards-character-recognition-on-licence-plates" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155443.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2766</span> A Human Activity Recognition System Based on Sensory Data Related to Object Usage </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Abdullah">M. Abdullah</a>, <a href="https://publications.waset.org/abstracts/search?q=Al-Wadud"> Al-Wadud</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sensor-based activity recognition systems usually accounts which sensors have been activated to perform an activity. The system then combines the conditional probabilities of those sensors to represent different activities and takes the decision based on that. However, the information about the sensors which are not activated may also be of great help in deciding which activity has been performed. This paper proposes an approach where the sensory data related to both usage and non-usage of objects are utilized to make the classification of activities. Experimental results also show the promising performance of the proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Na%C3%AFve%20Bayesian" title="Naïve Bayesian">Naïve Bayesian</a>, <a href="https://publications.waset.org/abstracts/search?q=based%20classification" title=" based classification"> based classification</a>, <a href="https://publications.waset.org/abstracts/search?q=activity%20recognition" title=" activity recognition"> activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20data" title=" sensor data"> sensor data</a>, <a href="https://publications.waset.org/abstracts/search?q=object-usage%20model" title=" object-usage model"> object-usage model</a> </p> <a href="https://publications.waset.org/abstracts/4112/a-human-activity-recognition-system-based-on-sensory-data-related-to-object-usage" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4112.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">322</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2765</span> 6D Posture Estimation of Road Vehicles from Color Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yoshimoto%20Kurihara">Yoshimoto Kurihara</a>, <a href="https://publications.waset.org/abstracts/search?q=Tad%20Gonsalves"> Tad Gonsalves</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Currently, in the field of object posture estimation, there is research on estimating the position and angle of an object by storing a 3D model of the object to be estimated in advance in a computer and matching it with the model. However, in this research, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks – a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy of each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°, the accuracy of the classification was about 87.3%, and that of regression was about 98.9%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=6D%20posture%20estimation" title="6D posture estimation">6D posture estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=AlexNet" title=" AlexNet"> AlexNet</a> </p> <a href="https://publications.waset.org/abstracts/138449/6d-posture-estimation-of-road-vehicles-from-color-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2764</span> Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hui%20Wei">Hui Wei</a>, <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Dong"> Zheng Dong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object recognition has long been a challenging task in computer vision. Yet the human brain, with the ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism in the visual pathway. We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image data sets. The result provides strong support for the potential of this model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biological%20model" title="biological model">biological model</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-layer%20neural%20network" title=" multi-layer neural network"> multi-layer neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a> </p> <a href="https://publications.waset.org/abstracts/25221/local-image-features-emerging-from-brain-inspired-multi-layer-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25221.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">542</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2763</span> Recognition of Objects in a Maritime Environment Using a Combination of Pre- and Post-Processing of the Polynomial Fit Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20R.%20Hordijk">R. R. Hordijk</a>, <a href="https://publications.waset.org/abstracts/search?q=O.%20J.%20G.%20Somsen"> O. J. G. Somsen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traditionally, radar systems are the eyes and ears of a ship. However, these systems have their drawbacks and nowadays they are extended with systems that work with video and photos. Processing of data from these videos and photos is however very labour-intensive and efforts are being made to automate this process. A major problem when trying to recognize objects in water is that the 'background' is not homogeneous so that traditional image recognition technics do not work well. Main question is, can a method be developed which automate this recognition process. There are a large number of parameters involved to facilitate the identification of objects on such images. One is varying the resolution. In this research, the resolution of some images has been reduced to the extreme value of 1% of the original to reduce clutter before the polynomial fit (pre-processing). It turned out that the searched object was clearly recognizable as its grey value was well above the average. Another approach is to take two images of the same scene shortly after each other and compare the result. Because the water (waves) fluctuates much faster than an object floating in the water one can expect that the object is the only stable item in the two images. Both these methods (pre-processing and comparing two images of the same scene) delivered useful results. Though it is too early to conclude that with these methods all image problems can be solved they are certainly worthwhile for further research. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=polynomial%20fit" title=" polynomial fit"> polynomial fit</a>, <a href="https://publications.waset.org/abstracts/search?q=water" title=" water"> water</a> </p> <a href="https://publications.waset.org/abstracts/34331/recognition-of-objects-in-a-maritime-environment-using-a-combination-of-pre-and-post-processing-of-the-polynomial-fit-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">534</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2762</span> A Contribution to Human Activities Recognition Using Expert System Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Malika%20Yaici">Malika Yaici</a>, <a href="https://publications.waset.org/abstracts/search?q=Soraya%20Aloui"> Soraya Aloui</a>, <a href="https://publications.waset.org/abstracts/search?q=Sara%20Semchaoui"> Sara Semchaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper deals with human activity recognition from sensor data. It is an active research area, and the main objective is to obtain a high recognition rate. In this work, a recognition system based on expert systems is proposed; the recognition is performed using the objects, object states, and gestures and taking into account the context (the location of the objects and of the person performing the activity, the duration of the elementary actions and the activity). The system recognizes complex activities after decomposing them into simple, easy-to-recognize activities. The proposed method can be applied to any type of activity. The simulation results show the robustness of our system and its speed of decision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title="human activity recognition">human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=ubiquitous%20computing" title=" ubiquitous computing"> ubiquitous computing</a>, <a href="https://publications.waset.org/abstracts/search?q=context-awareness" title=" context-awareness"> context-awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=expert%20system" title=" expert system"> expert system</a> </p> <a href="https://publications.waset.org/abstracts/171721/a-contribution-to-human-activities-recognition-using-expert-system-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171721.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">118</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2761</span> Human Activities Recognition Based on Expert System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Malika%20Yaici">Malika Yaici</a>, <a href="https://publications.waset.org/abstracts/search?q=Soraya%20Aloui"> Soraya Aloui</a>, <a href="https://publications.waset.org/abstracts/search?q=Sara%20Semchaoui"> Sara Semchaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recognition of human activities from sensor data is an active research area, and the main objective is to obtain a high recognition rate. In this work, we propose a recognition system based on expert systems. The proposed system makes the recognition based on the objects, object states, and gestures, taking into account the context (the location of the objects and of the person performing the activity, the duration of the elementary actions, and the activity). This work focuses on complex activities which are decomposed into simple easy to recognize activities. The proposed method can be applied to any type of activity. The simulation results show the robustness of our system and its speed of decision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title="human activity recognition">human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=ubiquitous%20computing" title=" ubiquitous computing"> ubiquitous computing</a>, <a href="https://publications.waset.org/abstracts/search?q=context-awareness" title=" context-awareness"> context-awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=expert%20system" title=" expert system"> expert system</a> </p> <a href="https://publications.waset.org/abstracts/151943/human-activities-recognition-based-on-expert-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2760</span> A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mais%20Nijim">Mais Nijim</a>, <a href="https://publications.waset.org/abstracts/search?q=Rama%20Devi%20Chennuboyina"> Rama Devi Chennuboyina</a>, <a href="https://publications.waset.org/abstracts/search?q=Waseem%20Al%20Aqqad"> Waseem Al Aqqad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Advances in spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data we acquire through satellites, radars and sensors consists of important geographical information that can be used for remote sensing applications such as region planning, disaster management. Spatial data classification and object recognition are important tasks for many applications. However, classifying objects and identifying them manually from images is a difficult task. Object recognition is often considered as a classification problem, this task can be performed using machine-learning techniques. Despite of many machine-learning algorithms, the classification is done using supervised classifiers such as Support Vector Machines (SVM) as the area of interest is known. We proposed a classification method, which considers neighboring pixels in a region for feature extraction and it evaluates classifications precisely according to neighboring classes for semantic interpretation of region of interest (ROI). A dataset has been created for training and testing purpose; we generated the attributes by considering pixel intensity values and mean values of reflectance. We demonstrated the benefits of using knowledge discovery and data-mining techniques, which can be on image data for accurate information extraction and classification from high spatial resolution remote sensing imagery. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title="remote sensing">remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title=" data mining"> data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=waterbody%20identification" title=" waterbody identification"> waterbody identification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/38236/a-supervised-learning-data-mining-approach-for-object-recognition-and-classification-in-high-resolution-satellite-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38236.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">339</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2759</span> Canonical Objects and Other Objects in Arabic</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Safiah%20Ahmed%20Madkhali">Safiah Ahmed Madkhali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The grammatical relation object has not attracted the same attention in the literature as subject has. Where there is a clearly monotransitive verb such as kick, the criteria for identifying the grammatical relation may converge. However, the term object is also used to refer to phenomena that do not subsume all, or even most, of the recognized properties of the canonical object. Instances of such phenomena include non-canonical objects such as the ones in the so-called double-object construction i.e. the indirect object and the direct object as in (He bought his dog a new collar). In this paper, it is demonstrated how criteria of identifying the grammatical relation object that are found in the theoretical and typological literature can be applied to Arabic. Also, further language-specific criteria are here derived from the regularities of the canonical object in the language. The criteria established in this way are then applied to the non-canonical objects to demonstrate how far they conform to, or diverge from, the canonical object. Contrary to the claim that the direct object is more similar to the canonical object than is the indirect object, it was found that it is, in fact, the indirect object rather than the direct object that shares most of the aspects of the canonical object in monotransitive clauses. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=canonical%20objects" title="canonical objects">canonical objects</a>, <a href="https://publications.waset.org/abstracts/search?q=double-object%20constructions" title=" double-object constructions"> double-object constructions</a>, <a href="https://publications.waset.org/abstracts/search?q=cognate%20object%20constructions" title=" cognate object constructions"> cognate object constructions</a>, <a href="https://publications.waset.org/abstracts/search?q=non-canonical%20objects" title=" non-canonical objects"> non-canonical objects</a> </p> <a href="https://publications.waset.org/abstracts/141579/canonical-objects-and-other-objects-in-arabic" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141579.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2758</span> An Accurate Computation of 2D Zernike Moments via Fast Fourier Transform </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20S.%20Al-Rawi">Mohammed S. Al-Rawi</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Bastos"> J. Bastos</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Rodriguez"> J. Rodriguez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection and object recognition are essential components of every computer vision system. Despite the high computational complexity and other problems related to numerical stability and accuracy, Zernike moments of 2D images (ZMs) have shown resilience when used in object recognition and have been used in various image analysis applications. In this work, we propose a novel method for computing ZMs via Fast Fourier Transform (FFT). Notably, this is the first algorithm that can generate ZMs up to extremely high orders accurately, e.g., it can be used to generate ZMs for orders up to 1000 or even higher. Furthermore, the proposed method is also simpler and faster than the other methods due to the availability of FFT software and/or hardware. The accuracies and numerical stability of ZMs computed via FFT have been confirmed using the orthogonality property. We also introduce normalizing ZMs with Neumann factor when the image is embedded in a larger grid, and color image reconstruction based on RGB normalization of the reconstructed images. Astonishingly, higher-order image reconstruction experiments show that the proposed methods are superior, both quantitatively and subjectively, compared to the q-recursive method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chebyshev%20polynomial" title="Chebyshev polynomial">Chebyshev polynomial</a>, <a href="https://publications.waset.org/abstracts/search?q=fourier%20transform" title=" fourier transform"> fourier transform</a>, <a href="https://publications.waset.org/abstracts/search?q=fast%20algorithms" title=" fast algorithms"> fast algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=pseudo%20Zernike%20moments" title=" pseudo Zernike moments"> pseudo Zernike moments</a>, <a href="https://publications.waset.org/abstracts/search?q=Zernike%20moments" title=" Zernike moments"> Zernike moments</a> </p> <a href="https://publications.waset.org/abstracts/58226/an-accurate-computation-of-2d-zernike-moments-via-fast-fourier-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58226.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">265</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2757</span> A Review on Artificial Neural Networks in Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Afsharipoor">B. Afsharipoor</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Nazemi"> E. Nazemi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Artificial neural networks (ANNs) are powerful tool for prediction which can be trained based on a set of examples and thus, it would be useful for nonlinear image processing. The present paper reviews several paper regarding applications of ANN in image processing to shed the light on advantage and disadvantage of ANNs in this field. Different steps in the image processing chain including pre-processing, enhancement, segmentation, object recognition, image understanding and optimization by using ANN are summarized. Furthermore, results on using multi artificial neural networks are presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title="neural networks">neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20understanding" title=" image understanding"> image understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization" title=" optimization"> optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=MANN" title=" MANN"> MANN</a> </p> <a href="https://publications.waset.org/abstracts/36843/a-review-on-artificial-neural-networks-in-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36843.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">406</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2756</span> Object Recognition Approach Based on Generalized Hough Transform and Color Distribution Serving in Generating Arabic Sentences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nada%20Farhani">Nada Farhani</a>, <a href="https://publications.waset.org/abstracts/search?q=Naim%20Terbeh"> Naim Terbeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Mounir%20Zrigui"> Mounir Zrigui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The recognition of the objects contained in images has always presented a challenge in the field of research because of several difficulties that the researcher can envisage because of the variability of shape, position, contrast of objects, etc. In this paper, we will be interested in the recognition of objects. The classical Hough Transform (HT) presented a tool for detecting straight line segments in images. The technique of HT has been generalized (GHT) for the detection of arbitrary forms. With GHT, the forms sought are not necessarily defined analytically but rather by a particular silhouette. For more precision, we proposed to combine the results from the GHT with the results from a calculation of similarity between the histograms and the spatiograms of the images. The main purpose of our work is to use the concepts from recognition to generate sentences in Arabic that summarize the content of the image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=recognition%20of%20shape" title="recognition of shape">recognition of shape</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20hough%20transformation" title=" generalized hough transformation"> generalized hough transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=spatiogram" title=" spatiogram"> spatiogram</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a> </p> <a href="https://publications.waset.org/abstracts/101706/object-recognition-approach-based-on-generalized-hough-transform-and-color-distribution-serving-in-generating-arabic-sentences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101706.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2755</span> Vision-Based Hand Segmentation Techniques for Human-Computer Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Jebali">M. Jebali</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Jemni"> M. Jemni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work is the part of vision based hand gesture recognition system for Natural Human Computer Interface. Hand tracking and segmentation are the primary steps for any hand gesture recognition system. The aim of this paper is to develop robust and efficient hand segmentation algorithm such as an input to another system which attempt to bring the HCI performance nearby the human-human interaction, by modeling an intelligent sign language recognition system based on prediction in the context of dialogue between the system (avatar) and the interlocutor. For the purpose of hand segmentation, an overcoming occlusion approach has been proposed for superior results for detection of hand from an image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HCI" title="HCI">HCI</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20segmentation" title=" hand segmentation"> hand segmentation</a> </p> <a href="https://publications.waset.org/abstracts/26490/vision-based-hand-segmentation-techniques-for-human-computer-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26490.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2754</span> Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei-Jong%20Yang">Wei-Jong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Hau%20Du"> Wei-Hau Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Pau-Choo%20Chang"> Pau-Choo Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jar-Ferr%20Yang"> Jar-Ferr Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Pi-Hsia%20Hung"> Pi-Hsia Hung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The demands of smart visual thing recognition in various devices have been increased rapidly for daily smart production, living and learning systems in recent years. This paper proposed a visual thing recognition system, which combines binary scale-invariant feature transform (SIFT), bag of words model (BoW), and support vector machine (SVM) by using color information. Since the traditional SIFT features and SVM classifiers only use the gray information, color information is still an important feature for visual thing recognition. With color-based SIFT features and SVM, we can discard unreliable matching pairs and increase the robustness of matching tasks. The experimental results show that the proposed object recognition system with color-assistant SIFT SVM classifier achieves higher recognition rate than that with the traditional gray SIFT and SVM classification in various situations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20moments" title="color moments">color moments</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20thing%20recognition%20system" title=" visual thing recognition system"> visual thing recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20SIFT" title=" color SIFT"> color SIFT</a> </p> <a href="https://publications.waset.org/abstracts/62857/visual-thing-recognition-with-binary-scale-invariant-feature-transform-and-support-vector-machine-classifiers-using-color-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62857.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2753</span> Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jigg%20Pelayo">Jigg Pelayo</a>, <a href="https://publications.waset.org/abstracts/search?q=Ricardo%20Villar"> Ricardo Villar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Leaning on the development of aerial laser scanning in the Philippine geospatial industry, researches about remote sensing and machine vision technology became a trend. Object detection via template matching is one of its application which characterized to be fast and in real time. The paper purposely attempts to provide application for robust pattern matching algorithm based on the normalized cross correlation (NCC) criterion function subjected in Object-based image analysis (OBIA) utilizing high-resolution aerial imagery and low density LiDAR data. The height information from laser scanning provides effective partitioning order, thus improving the hierarchal class feature pattern which allows to skip unnecessary calculation. Since detection is executed in the object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively avoid the influence of noise, small distortion and fluctuating image saturation that affect the rate of recognition of features. Furthermore, the scheme is evaluated to recognized the performance in different situations and inspect the computational complexities of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy of 91% above. Also, the garnered results portray the potential and efficiency of the implemented algorithm under different lighting conditions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=algorithm" title="algorithm">algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=OBIA" title=" OBIA"> OBIA</a> </p> <a href="https://publications.waset.org/abstracts/58677/spatial-object-oriented-template-matching-algorithm-using-normalized-cross-correlation-criterion-for-tracking-aerial-image-scene" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58677.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">244</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2752</span> Make Up Flash: Web Application for the Improvement of Physical Appearance in Images Based on Recognition Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stefania%20Arguelles%20Reyes">Stefania Arguelles Reyes</a>, <a href="https://publications.waset.org/abstracts/search?q=Octavio%20Jos%C3%A9%20Salcedo%20Parra"> Octavio José Salcedo Parra</a>, <a href="https://publications.waset.org/abstracts/search?q=Alberto%20Acosta%20L%C3%B3pez"> Alberto Acosta López</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a web application for the improvement of images through recognition. The web application is based on the analysis of picture-based recognition methods that allow an improvement on the physical appearance of people posting in social networks. The basis relies on the study of tools that can correct or improve some features of the face, with the help of a wide collection of user images taken as reference to build a facial profile. Automatic facial profiling can be achieved with a deeper study of the Object Detection Library. It was possible to improve the initial images with the help of MATLAB and its filtering functions. The user can have a direct interaction with the program and manually adjust his preferences. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Matlab" title="Matlab">Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=make%20up" title=" make up"> make up</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition%20methods" title=" recognition methods"> recognition methods</a>, <a href="https://publications.waset.org/abstracts/search?q=web%20application" title=" web application"> web application</a> </p> <a href="https://publications.waset.org/abstracts/108091/make-up-flash-web-application-for-the-improvement-of-physical-appearance-in-images-based-on-recognition-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108091.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2751</span> A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mulugeta%20K.%20Tefera">Mulugeta K. Tefera</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaolong%20Yang"> Xiaolong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20Liu"> Jian Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection and tracking of moving objects are very important in many application contexts such as detection and recognition of people, visual surveillance and automatic generation of video effect and so on. However, the task of detecting a real shape of an object in motion becomes tricky due to various challenges like dynamic scene changes, presence of shadow, and illumination variations due to light switch. For such systems, once the moving object is detected, tracking is also a crucial step for those applications that used in military defense, video surveillance, human computer interaction, and medical diagnostics as well as in commercial fields such as video games. In this paper, an object presents in dynamic background is detected using adaptive mixture of Gaussian based analysis of the video sequences. Then the detected moving object is tracked using the region based moving object tracking and inter-frame differential mechanisms to address the partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target by enhancing and post processing morphological operations. Secondly, the extracted object uses region based moving object tracking and inter-frame difference to improve the tracking speed of real-time moving objects in different video frames. Finally, the plotting method was applied to detect the moving objects effectively and describes the object’s motion being tracked. The experiment has been performed on image sequences acquired both indoor and outdoor environments and one stationary and web camera has been used. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20modeling" title="background modeling">background modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=inter-frame%20difference" title=" inter-frame difference"> inter-frame difference</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20and%20tracking" title=" object detection and tracking"> object detection and tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/78578/a-real-time-moving-object-detection-and-tracking-scheme-and-its-implementation-for-video-surveillance-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78578.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">477</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2750</span> Deep Learning Based Unsupervised Sport Scene Recognition and Highlights Generation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ksenia%20Meshkova">Ksenia Meshkova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With increasing amount of multimedia data, it is very important to automate and speed up the process of obtaining meta. This process means not just recognition of some object or its movement, but recognition of the entire scene versus separate frames and having timeline segmentation as a final result. Labeling datasets is time consuming, besides, attributing characteristics to particular scenes is clearly difficult due to their nature. In this article, we will consider autoencoders application to unsupervised scene recognition and clusterization based on interpretable features. Further, we will focus on particular types of auto encoders that relevant to our study. We will take a look at the specificity of deep learning related to information theory and rate-distortion theory and describe the solutions empowering poor interpretability of deep learning in media content processing. As a conclusion, we will present the results of the work of custom framework, based on autoencoders, capable of scene recognition as was deeply studied above, with highlights generation resulted out of this recognition. We will not describe in detail the mathematical description of neural networks work but will clarify the necessary concepts and pay attention to important nuances. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title="neural networks">neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=representation%20learning" title=" representation learning"> representation learning</a>, <a href="https://publications.waset.org/abstracts/search?q=autoencoders" title=" autoencoders"> autoencoders</a> </p> <a href="https://publications.waset.org/abstracts/126046/deep-learning-based-unsupervised-sport-scene-recognition-and-highlights-generation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126046.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2749</span> 3D Human Face Reconstruction in Unstable Conditions </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaoyuan%20Suo">Xiaoyuan Suo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 3D object reconstruction is a broad research area within the computer vision field involving many stages and still open problems. One of the existing challenges in this field lies with micromotion, such as the facial expressions on the appearance of the human or animal face. Similar literatures in this field focuses on 3D reconstruction in stable conditions such as an existing image or photos taken in a rather static environment, while the purpose of this work is to discuss a flexible scan system using multiple cameras that can correctly reconstruct 3D stable and moving objects -- human face with expression in particular. Further, a mathematical model is proposed at the end of this literature to automate the 3D object reconstruction process. The reconstruction process takes several stages. Firstly, a set of simple 2D lines would be projected onto the object and hence a set of uneven curvy lines can be obtained, which represents the 3D numerical data of the surface. The lines and their shapes will help to identify object’s 3D construction in pixels. With the two-recorded angles and their distance from the camera, a simple mathematical calculation would give the resulting coordinate of each projected line in an absolute 3D space. This proposed research will benefit many practical areas, including but not limited to biometric identification, authentications, cybersecurity, preservation of cultural heritage, drama acting especially those with rapid and complex facial gestures, and many others. Specifically, this will (I) provide a brief survey of comparable techniques existing in this field. (II) discuss a set of specialized methodologies or algorithms for effective reconstruction of 3D objects. (III)implement, and testing the developed methodologies. (IV) verify findings with data collected from experiments. (V) conclude with lessons learned and final thoughts. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20photogrammetry" title="3D photogrammetry">3D photogrammetry</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20object%20reconstruction" title=" 3D object reconstruction"> 3D object reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20recognition" title=" facial expression recognition"> facial expression recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a> </p> <a href="https://publications.waset.org/abstracts/92745/3d-human-face-reconstruction-in-unstable-conditions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92745.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2748</span> Hand Gesture Detection via EmguCV Canny Pruning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20N.%20Mosola">N. N. Mosola</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20J.%20Molete"> S. J. Molete</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20S.%20Masoebe"> L. S. Masoebe</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Letsae"> M. Letsae</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human Computer Interaction (HCI), Expert systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool. This tool is used mostly by deaf societies and those with speech disorder. Communication barriers exist when societies with speech disorder interact with others. This research aims to build a hand recognition system for Lesotho&rsquo;s Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by the mentioned societies. The system has various processing modules. The modules consist of a hand detection engine, image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses Canny pruning Haar and Haarcascade detection algorithms. Canny pruning implements the Canny edge detection. This is an optimal image processing algorithm. It is used to detect edges of an object. The system employs a skin detection algorithm. The skin detection performs background subtraction, computes the convex hull, and the centroid to assist in the detection process. Recognition is a process of gesture classification. Template matching classifies each hand gesture in real-time. The system was tested using various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and ultimately recognition. Detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered. 
The higher the light intensity, the faster the detection. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system that can be used for sign language interpretation.
Keywords: Canny pruning, hand recognition, machine learning, skin tracking
Procedia: https://publications.waset.org/abstracts/91296/hand-gesture-detection-via-emgucv-canny-pruning
PDF: https://publications.waset.org/abstracts/91296.pdf
Downloads: 185
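A minimal sketch of the skin-detection stage described above: thresholding skin pixels in YCrCb, then computing the convex hull and centroid of the largest contour. It uses OpenCV's Python bindings rather than EmguCV, the thresholds are commonly quoted values rather than the authors' calibration, background subtraction is omitted, and the image path is a placeholder.

    # Minimal sketch: skin mask -> largest contour -> convex hull and centroid.
    import cv2
    import numpy as np

    frame = cv2.imread("hand.jpg")                                     # placeholder
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array([0, 135, 85], np.uint8), np.array([255, 180, 135], np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        hull = cv2.convexHull(hand)
        M = cv2.moments(hand)
        cx, cy = int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])    # centroid of the hand blob
        cv2.drawContours(frame, [hull], -1, (0, 255, 0), 2)
        cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)
        cv2.imwrite("hand_detected.jpg", frame)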
href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false 
