CINXE.COM
Search results for: Object Identification
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: Object Identification</title> <meta name="description" content="Search results for: Object Identification"> <meta name="keywords" content="Object Identification"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="Object Identification" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Object Identification"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4079</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Object Identification</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4079</span> Object-Oriented Program Comprehension by Identification of Software Components and Their Connexions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdelhak-Djamel%20Seriai">Abdelhak-Djamel Seriai</a>, <a href="https://publications.waset.org/abstracts/search?q=Selim%20Kebir"> Selim Kebir</a>, <a href="https://publications.waset.org/abstracts/search?q=Allaoua%20Chaoui"> Allaoua Chaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the last decades, object oriented program- ming has been massively used to build large-scale systems. However, evolution and maintenance of such systems become a laborious task because of the lack of object oriented programming to offer a precise view of the functional building blocks of the system. This lack is caused by the fine granularity of classes and objects. In this paper, we use a post object-oriented technology namely software components, to propose an approach based on the identification of the functional building blocks of an object oriented system by analyzing its source code. These functional blocks are specified as software components and the result is a multi-layer component based software architecture. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=software%20comprehension" title="software comprehension">software comprehension</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20component" title=" software component"> software component</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20oriented" title=" object oriented"> object oriented</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20architecture" title=" software architecture"> software architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=reverse%20engineering" title=" reverse engineering"> reverse engineering</a> </p> <a href="https://publications.waset.org/abstracts/32119/object-oriented-program-comprehension-by-identification-of-software-components-and-their-connexions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32119.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">414</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4078</span> Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fidel%20L%C3%B2pez%20Saca">Fidel Lòpez Saca</a>, <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Avil%C3%A9s-Cruz"> Carlos Avilés-Cruz</a>, <a href="https://publications.waset.org/abstracts/search?q=Miguel%20Magos-Rivera"> Miguel Magos-Rivera</a>, <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20Antonio%20Lara-Ch%C3%A1vez"> José Antonio Lara-Chávez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system in an assembly line. The proposed shapes-color recognition system is based on deep learning theory in a specially designed convolutional network architecture. The used methodology involve stages such as: image capturing, color filtering, location of object mass centers, horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real-time. The implementation was done on a Raspberry Pi 3 system and on a Jetson-Nano device. The proposal is used in an assembly course of bachelor’s degree in industrial engineering. The results presented include studying the efficiency of the recognition and processing time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title="deep-learning">deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20identification" title=" image identification"> image identification</a>, <a href="https://publications.waset.org/abstracts/search?q=industrial%20engineering." title=" industrial engineering."> industrial engineering.</a> </p> <a href="https://publications.waset.org/abstracts/126071/automatic-product-identification-based-on-deep-learning-theory-in-an-assembly-line" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126071.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4077</span> Methodology for the Integration of Object Identification Processes in Handling and Logistic Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=L.%20Kiefer">L. Kiefer</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Richter"> C. Richter</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Reinhart"> G. Reinhart</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The uprising complexity in production systems due to an increasing amount of variants up to customer innovated products leads to requirements that hierarchical control systems are not able to fulfil. Therefore, factory planners can install autonomous manufacturing systems. The fundamental requirement for an autonomous control is the identification of objects within production systems. In this approach an attribute-based identification is focused for avoiding dose-dependent identification costs. Instead of using an identification mark (ID) like a radio frequency identification (RFID)-Tag, an object type is directly identified by its attributes. To facilitate that it’s recommended to include the identification and the corresponding sensors within handling processes, which connect all manufacturing processes and therefore ensure a high identification rate and reduce blind spots. The presented methodology reduces the individual effort to integrate identification processes in handling systems. First, suitable object attributes and sensor systems for object identification in a production environment are defined. By categorising these sensor systems as well as handling systems, it is possible to match them universal within a compatibility matrix. Based on that compatibility further requirements like identification time are analysed, which decide whether the combination of handling and sensor system is well suited for parallel handling and identification within an autonomous control. By analysing a list of more than thousand possible attributes, first investigations have shown, that five main characteristics (weight, form, colour, amount, and position of subattributes as drillings) are sufficient for an integrable identification. This knowledge limits the variety of identification systems and leads to a manageable complexity within the selection process. Besides the procedure, several tools, as an example a sensor pool are presented. These tools include the generated specific expert knowledge and simplify the selection. The primary tool is a pool of preconfigured identification processes depending on the chosen combination of sensor and handling device. By following the defined procedure and using the created tools, even laypeople out of other scientific fields can choose an appropriate combination of handling devices and sensors which enable parallel handling and identification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=agent%20systems" title="agent systems">agent systems</a>, <a href="https://publications.waset.org/abstracts/search?q=autonomous%20control" title=" autonomous control"> autonomous control</a>, <a href="https://publications.waset.org/abstracts/search?q=handling%20systems" title=" handling systems"> handling systems</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a> </p> <a href="https://publications.waset.org/abstracts/91481/methodology-for-the-integration-of-object-identification-processes-in-handling-and-logistic-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91481.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4076</span> Canonical Objects and Other Objects in Arabic</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Safiah%20Ahmed%20Madkhali">Safiah Ahmed Madkhali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The grammatical relation object has not attracted the same attention in the literature as subject has. Where there is a clearly monotransitive verb such as kick, the criteria for identifying the grammatical relation may converge. However, the term object is also used to refer to phenomena that do not subsume all, or even most, of the recognized properties of the canonical object. Instances of such phenomena include non-canonical objects such as the ones in the so-called double-object construction i.e. the indirect object and the direct object as in (He bought his dog a new collar). In this paper, it is demonstrated how criteria of identifying the grammatical relation object that are found in the theoretical and typological literature can be applied to Arabic. Also, further language-specific criteria are here derived from the regularities of the canonical object in the language. The criteria established in this way are then applied to the non-canonical objects to demonstrate how far they conform to, or diverge from, the canonical object. Contrary to the claim that the direct object is more similar to the canonical object than is the indirect object, it was found that it is, in fact, the indirect object rather than the direct object that shares most of the aspects of the canonical object in monotransitive clauses. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=canonical%20objects" title="canonical objects">canonical objects</a>, <a href="https://publications.waset.org/abstracts/search?q=double-object%20constructions" title=" double-object constructions"> double-object constructions</a>, <a href="https://publications.waset.org/abstracts/search?q=cognate%20object%20constructions" title=" cognate object constructions"> cognate object constructions</a>, <a href="https://publications.waset.org/abstracts/search?q=non-canonical%20objects" title=" non-canonical objects"> non-canonical objects</a> </p> <a href="https://publications.waset.org/abstracts/141579/canonical-objects-and-other-objects-in-arabic" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141579.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4075</span> Humeral Head and Scapula Detection in Proton Density Weighted Magnetic Resonance Images Using YOLOv8</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aysun%20Sezer">Aysun Sezer</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Magnetic Resonance Imaging (MRI) is one of the advanced diagnostic tools for evaluating shoulder pathologies. Proton Density (PD)-weighted MRI sequences prove highly effective in detecting edema. However, they are deficient in the anatomical identification of bones due to a trauma-induced decrease in signal-to-noise ratio and blur in the traumatized cortices. Computer-based diagnostic systems require precise segmentation, identification, and localization of anatomical regions in medical imagery. Deep learning-based object detection algorithms exhibit remarkable proficiency in real-time object identification and localization. In this study, the YOLOv8 model was employed to detect humeral head and scapular regions in 665 axial PD-weighted MR images. The YOLOv8 configuration achieved an overall success rate of 99.60% and 89.90% for detecting the humeral head and scapula, respectively, with an intersection over union (IoU) of 0.5. Our findings indicate a significant promise of employing YOLOv8-based detection for the humerus and scapula regions, particularly in the context of PD-weighted images affected by both noise and intensity inhomogeneity. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=YOLOv8" title="YOLOv8">YOLOv8</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=humerus" title=" humerus"> humerus</a>, <a href="https://publications.waset.org/abstracts/search?q=scapula" title=" scapula"> scapula</a>, <a href="https://publications.waset.org/abstracts/search?q=IRM" title=" IRM"> IRM</a> </p> <a href="https://publications.waset.org/abstracts/175663/humeral-head-and-scapula-detection-in-proton-density-weighted-magnetic-resonance-images-using-yolov8" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/175663.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">66</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4074</span> Evidence of the Effect of the Structure of Social Representations on Group Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eric%20Bonetto">Eric Bonetto</a>, <a href="https://publications.waset.org/abstracts/search?q=Anthony%20Piermatteo"> Anthony Piermatteo</a>, <a href="https://publications.waset.org/abstracts/search?q=Fabien%20Girandola"> Fabien Girandola</a>, <a href="https://publications.waset.org/abstracts/search?q=Gregory%20Lo%20Monaco"> Gregory Lo Monaco</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present contribution focuses on the effect of the structure of social representations on group identification. A social representation (SR) is defined as an organized and structured set of cognitions, produced and shared by members of a same group about a same social object. Within this framework, the central core theory establishes a structural distinction between central cognitions – or 'core' – and peripheral ones: the former are theoretically considered as more connected than the later to group members’ social identity and may play a greater role in SRs’ ability to allow group identification by means of a common vision of the object of representation. Indeed, the central core provides a reference point for the in-group as it constitutes a consensual vision that gives meaning to a social object particularly important to individuals and to the group. However, while numerous contributions clearly refer to the underlying role of SRs in group identification, there are only few empirical evidences of this aspect. Thus, we hypothesize an effect of the structure of SRs on group identification. More precisely, central cognitions (vs. peripheral ones) will lead to a stronger group identification. In addition, we hypothesize that the refutation of a cognition will lead to a stronger group identification than its activation. The SR mobilized here is that of 'studying' among a population of first-year undergraduate psychology students. Thus, a pretest (N = 82), using an Attribute-Challenge Technique, was designed in order to identify the central and the peripheral cognitions to use in the primings of our main study. The results of this pretest are in line with previous studies. Then, the main study (online; N = 184), using a social priming methodology, was based on a 2 (Structural status of the cognitions belonging to the prime: central vs. peripheral) x 2 (Type of prime: activation vs. refutation) experimental design in order to test our hypotheses. Results revealed, as expected, the main effect of the structure of the SR on group identification. Indeed, central cognitions trigger a higher level of identification than the peripheral ones. However, we observe neither effect of the type of prime, nor interaction effect. These results experimentally demonstrate for the first time the effect of the structure of SRs on group identification and indicate that central cognitions are more connected than peripheral ones to group members’ social identity. These results will be discussed considering the importance of understanding identity as a function of SRs and on their ability to potentially solve the lack of consideration of the definition of the group in Social Representations Theory. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=group%20identification" title="group identification">group identification</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20identity" title=" social identity"> social identity</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20representations" title=" social representations"> social representations</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20approach" title=" structural approach"> structural approach</a> </p> <a href="https://publications.waset.org/abstracts/85103/evidence-of-the-effect-of-the-structure-of-social-representations-on-group-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85103.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">192</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4073</span> An Image Processing Scheme for Skin Fungal Disease Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20A.%20M.%20A.%20S.%20S.%20Perera">A. A. M. A. S. S. Perera</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20A.%20Ranasinghe"> L. A. Ranasinghe</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20K.%20H.%20Nimeshika"> T. K. H. Nimeshika</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20M.%20Dhanushka%20Dissanayake"> D. M. Dhanushka Dissanayake</a>, <a href="https://publications.waset.org/abstracts/search?q=Namalie%20Walgampaya"> Namalie Walgampaya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, skin fungal diseases are mostly found in people of tropical countries like Sri Lanka. A skin fungal disease is a particular kind of illness caused by fungus. These diseases have various dangerous effects on the skin and keep on spreading over time. It becomes important to identify these diseases at their initial stage to control it from spreading. This paper presents an automated skin fungal disease identification system implemented to speed up the diagnosis process by identifying skin fungal infections in digital images. An image of the diseased skin lesion is acquired and a comprehensive computer vision and image processing scheme is used to process the image for the disease identification. This includes colour analysis using RGB and HSV colour models, texture classification using Grey Level Run Length Matrix, Grey Level Co-Occurrence Matrix and Local Binary Pattern, Object detection, Shape Identification and many more. This paper presents the approach and its outcome for identification of four most common skin fungal infections, namely, Tinea Corporis, Sporotrichosis, Malassezia and Onychomycosis. The main intention of this research is to provide an automated skin fungal disease identification system that increase the diagnostic quality, shorten the time-to-diagnosis and improve the efficiency of detection and successful treatment for skin fungal diseases. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Circularity%20Index" title="Circularity Index">Circularity Index</a>, <a href="https://publications.waset.org/abstracts/search?q=Grey%20Level%20Run%20Length%20Matrix" title=" Grey Level Run Length Matrix"> Grey Level Run Length Matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=Grey%20Level%20Co-Occurrence%20Matrix" title=" Grey Level Co-Occurrence Matrix"> Grey Level Co-Occurrence Matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=Local%20Binary%20Pattern" title=" Local Binary Pattern"> Local Binary Pattern</a>, <a href="https://publications.waset.org/abstracts/search?q=Object%20detection" title=" Object detection"> Object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Ring%20Detection" title=" Ring Detection"> Ring Detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Shape%20Identification" title=" Shape Identification"> Shape Identification</a> </p> <a href="https://publications.waset.org/abstracts/82490/an-image-processing-scheme-for-skin-fungal-disease-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82490.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4072</span> Identification of High-Rise Buildings Using Object Based Classification and Shadow Extraction Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subham%20Kharel">Subham Kharel</a>, <a href="https://publications.waset.org/abstracts/search?q=Sudha%20Ravindranath"> Sudha Ravindranath</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Vidya"> A. Vidya</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Chandrasekaran"> B. Chandrasekaran</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Ganesha%20Raj"> K. Ganesha Raj</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Shesadri"> T. Shesadri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digitization of urban features is a tedious and time-consuming process when done manually. In addition to this problem, Indian cities have complex habitat patterns and convoluted clustering patterns, which make it even more difficult to map features. This paper makes an attempt to classify urban objects in the satellite image using object-oriented classification techniques in which various classes such as vegetation, water bodies, buildings, and shadows adjacent to the buildings were mapped semi-automatically. Building layer obtained as a result of object-oriented classification along with already available building layers was used. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery, spatial technology along with logical reasoning and mathematical considerations. The results clearly depict the ability of Remote Sensing and GIS to solve complex problems in urban scenarios like studying urban sprawl and identification of more complex features in an urban area like high-rise buildings and multi-dwelling units. Object-Oriented Technique has been proven to be effective and has yielded an overall efficiency of 80 percent in the classification of high-rise buildings. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20oriented%20classification" title="object oriented classification">object oriented classification</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20extraction" title=" shadow extraction"> shadow extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=high-rise%20buildings" title=" high-rise buildings"> high-rise buildings</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20technology" title=" spatial technology"> spatial technology</a> </p> <a href="https://publications.waset.org/abstracts/130749/identification-of-high-rise-buildings-using-object-based-classification-and-shadow-extraction-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130749.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4071</span> Application of Low-order Modeling Techniques and Neural-Network Based Models for System Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Venkatesh%20Pulletikurthi">Venkatesh Pulletikurthi</a>, <a href="https://publications.waset.org/abstracts/search?q=Karthik%20B.%20Ariyur"> Karthik B. Ariyur</a>, <a href="https://publications.waset.org/abstracts/search?q=Luciano%20Castillo"> Luciano Castillo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The system identification from the turbulence wakes will lead to the tactical advantage to prepare and also, to predict the trajectory of the opponents’ movements. A low-order modeling technique, POD, is used to predict the object based on the wake pattern and compared with pre-trained image recognition neural network (NN) to classify the wake patterns into objects. It is demonstrated that low-order modeling, POD, is able to predict the objects better compared to pretrained NN by ~30%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=the%20bluff%20body%20wakes" title="the bluff body wakes">the bluff body wakes</a>, <a href="https://publications.waset.org/abstracts/search?q=low-order%20modeling" title=" low-order modeling"> low-order modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20identification" title=" system identification"> system identification</a> </p> <a href="https://publications.waset.org/abstracts/146168/application-of-low-order-modeling-techniques-and-neural-network-based-models-for-system-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146168.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">182</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4070</span> Software Component Identification from Its Object-Oriented Code: Graph Metrics Based Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manel%20Brichni">Manel Brichni</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelhak-Djamel%20Seriai"> Abdelhak-Djamel Seriai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Systems are increasingly complex. To reduce their complexity, an abstract view of the system can simplify its development. To overcome this problem, we propose a method to decompose systems into subsystems while reducing their coupling. These subsystems represent components. Consisting of an existing object-oriented systems, the main idea of our approach is based on modelling as graphs all entities of an oriented object source code. Such modelling is easy to handle, so we can apply restructuring algorithms based on graph metrics. The particularity of our approach consists in integrating in addition to standard metrics, such as coupling and cohesion, some graph metrics giving more precision during the components identication. To treat this problem, we relied on the ROMANTIC approach that proposed a component-based software architecture recovery from an object oriented system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=software%20reengineering" title="software reengineering">software reengineering</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20component%0D%0Aand%20interfaces" title=" software component and interfaces"> software component and interfaces</a>, <a href="https://publications.waset.org/abstracts/search?q=metrics" title=" metrics"> metrics</a>, <a href="https://publications.waset.org/abstracts/search?q=graphs" title=" graphs"> graphs</a> </p> <a href="https://publications.waset.org/abstracts/13322/software-component-identification-from-its-object-oriented-code-graph-metrics-based-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13322.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">501</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4069</span> When Pain Becomes Love For God: The Non-Object Self</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Roni%20Naor-Hofri">Roni Naor-Hofri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper shows how self-inflicted pain enabled the expression of love for God among Christian monastic ascetics in medieval central Europe. As scholars have shown, being in a state of pain leads to a change in or destruction of language, an essential feature of the self. The author argues that this transformation allows the self to transcend its boundaries as an object, even if only temporarily and in part. The epistemic achievement of love for God, a non-object, would not otherwise have been possible. To substantiate her argument, the author shows that the self’s transformation into a non-object enables the imitation of God: not solely in the sense of imitatio Christi, of physical and visual representations of God incarnate in the flesh of His son Christ, but also in the sense of the self’s experience of being a non-object, just like God, the target of the self’s love. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=love%20for%20God" title="love for God ">love for God </a>, <a href="https://publications.waset.org/abstracts/search?q=pain" title=" pain"> pain</a>, <a href="https://publications.waset.org/abstracts/search?q=philosophy" title=" philosophy"> philosophy</a>, <a href="https://publications.waset.org/abstracts/search?q=religion" title=" religion"> religion</a> </p> <a href="https://publications.waset.org/abstracts/135417/when-pain-becomes-love-for-god-the-non-object-self" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135417.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">244</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4068</span> Pose Normalization Network for Object Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bingquan%20Shen">Bingquan Shen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Convolutional Neural Networks (CNN) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. Given the problem where one have limited viewpoints of a particular object for classification, we present a pose normalization architecture to transform the object to existing viewpoints in the training dataset before classification to yield better classification performance. We have demonstrated that this Pose Normalization Network (PNN) can capture the style of the target object and is able to re-render it to a desired viewpoint. Moreover, we have shown that the PNN improves the classification result for the 3D chairs dataset and ShapeNet airplanes dataset when given only images at limited viewpoint, as compared to a CNN baseline. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20classification" title=" object classification"> object classification</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20normalization" title=" pose normalization"> pose normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=viewpoint%20invariant" title=" viewpoint invariant"> viewpoint invariant</a> </p> <a href="https://publications.waset.org/abstracts/56852/pose-normalization-network-for-object-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56852.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">355</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4067</span> The Study on How Social Cues in a Scene Modulate Basic Object Recognition Proces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shih-Yu%20Lo">Shih-Yu Lo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Stereotypes exist in almost every society, affecting how people interact with each other. However, to our knowledge, the influence of stereotypes was rarely explored in the context of basic perceptual processes. This study aims to explore how the gender stereotype affects object recognition. Participants were presented with a series of scene pictures, followed by a target display with a man or a woman, holding a weapon or a non-weapon object. The task was to identify whether the object in the target display was a weapon or not. Although the gender of the object holder could not predict whether he or she held a weapon, and was irrelevant to the task goal, the participant nevertheless tended to identify the object as a weapon when the object holder was a man than a woman. The analysis based on the signal detection theory showed that the stereotype effect on object recognition mainly resulted from the participant’s bias to make a 'weapon' response when a man was in the scene instead of a woman in the scene. In addition, there was a trend that the participant’s sensitivity to differentiate a weapon from a non-threating object was higher when a woman was in the scene than a man was in the scene. The results of this study suggest that the irrelevant social cues implied in the visual scene can be very powerful that they can modulate the basic object recognition process. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gender%20stereotype" title="gender stereotype">gender stereotype</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20detection%20theory" title=" signal detection theory"> signal detection theory</a>, <a href="https://publications.waset.org/abstracts/search?q=weapon" title=" weapon"> weapon</a> </p> <a href="https://publications.waset.org/abstracts/92535/the-study-on-how-social-cues-in-a-scene-modulate-basic-object-recognition-proces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92535.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">210</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4066</span> Integration of Wireless Sensor Networks and Radio Frequency Identification (RFID): An Assesment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arslan%20Murtaza">Arslan Murtaza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> RFID (Radio Frequency Identification) and WSN (Wireless sensor network) are two significant wireless technologies that have extensive diversity of applications and provide limitless forthcoming potentials. RFID is used to identify existence and location of objects whereas WSN is used to intellect and monitor the environment. Incorporating RFID with WSN not only provides identity and location of an object but also provides information regarding the condition of the object carrying the sensors enabled RFID tag. It can be widely used in stock management, asset tracking, asset counting, security, military, environmental monitoring and forecasting, healthcare, intelligent home, intelligent transport vehicles, warehouse management, and precision agriculture. This assessment presents a brief introduction of RFID, WSN, and integration of WSN and RFID, and then applications related to both RFID and WSN. This assessment also deliberates status of the projects on RFID technology carried out in different computing group projects to be taken on WSN and RFID technology. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wireless%20sensor%20network" title="wireless sensor network">wireless sensor network</a>, <a href="https://publications.waset.org/abstracts/search?q=RFID" title=" RFID"> RFID</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20sensor" title=" embedded sensor"> embedded sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=Wi-Fi" title=" Wi-Fi"> Wi-Fi</a>, <a href="https://publications.waset.org/abstracts/search?q=Bluetooth" title=" Bluetooth"> Bluetooth</a>, <a href="https://publications.waset.org/abstracts/search?q=integration" title=" integration"> integration</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20saving" title=" time saving"> time saving</a>, <a href="https://publications.waset.org/abstracts/search?q=cost%20efficient" title=" cost efficient "> cost efficient </a> </p> <a href="https://publications.waset.org/abstracts/52194/integration-of-wireless-sensor-networks-and-radio-frequency-identification-rfid-an-assesment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52194.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">335</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4065</span> Specified Human Motion Recognition and Unknown Hand-Held Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jinsiang%20Shaw">Jinsiang Shaw</a>, <a href="https://publications.waset.org/abstracts/search?q=Pik-Hoe%20Chen"> Pik-Hoe Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to integrate human recognition, motion recognition, and object tracking technologies without requiring a pre-training database model for motion recognition or the unknown object itself. Furthermore, it can simultaneously track multiple users and multiple objects. Unlike other existing human motion recognition methods, our approach employs a rule-based condition method to determine if a user hand is approaching or departing an object. It uses a background subtraction method to separate the human and object from the background, and employs behavior features to effectively interpret human object-grabbing actions. With an object’s histogram characteristics, we are able to isolate and track it using back projection. Hence, a moving object trajectory can be recorded and the object itself can be located. This particular technique can be used in a camera surveillance system in a shopping area to perform real-time intelligent surveillance, thus preventing theft. Experimental results verify the validity of the developed surveillance algorithm with an accuracy of 83% for shoplifting detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Automatic%20Tracking" title="Automatic Tracking">Automatic Tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Back%20Projection" title=" Back Projection"> Back Projection</a>, <a href="https://publications.waset.org/abstracts/search?q=Motion%20Recognition" title=" Motion Recognition"> Motion Recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Shoplifting" title=" Shoplifting"> Shoplifting</a> </p> <a href="https://publications.waset.org/abstracts/66866/specified-human-motion-recognition-and-unknown-hand-held-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4064</span> Facility Detection from Image Using Mathematical Morphology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=In-Geun%20Lim">In-Geun Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung-Woong%20Ra"> Sung-Woong Ra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As high resolution satellite images can be used, lots of studies are carried out for exploiting these images in various fields. This paper proposes the method based on mathematical morphology for extracting the ‘horse's hoof shaped object’. This proposed method can make an automatic object detection system to track the meaningful object in a large satellite image rapidly. Mathematical morphology process can apply in binary image, so this method is very simple. Therefore this method can easily extract the ‘horse's hoof shaped object’ from any images which have indistinct edges of the tracking object and have different image qualities depending on filming location, filming time, and filming environment. Using the proposed method by which ‘horse's hoof shaped object’ can be rapidly extracted, the performance of the automatic object detection system can be improved dramatically. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facility%20detection" title="facility detection">facility detection</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20image" title=" satellite image"> satellite image</a>, <a href="https://publications.waset.org/abstracts/search?q=object" title=" object"> object</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20morphology" title=" mathematical morphology"> mathematical morphology</a> </p> <a href="https://publications.waset.org/abstracts/67611/facility-detection-from-image-using-mathematical-morphology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67611.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">382</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4063</span> Calculation of the Added Mass of a Submerged Object with Variable Sizes at Different Distances from the Wall via Lattice Boltzmann Simulations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nastaran%20Ahmadpour%20Samani">Nastaran Ahmadpour Samani</a>, <a href="https://publications.waset.org/abstracts/search?q=Shahram%20Talebi"> Shahram Talebi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Added mass is an important quantity in analysis of the motion of a submerged object ,which can be calculated by solving the equation of potential flow around the object . Here, we consider systems in which a square object is submerged in a channel of fluid and moves parallel to the wall. The corresponding added mass at a given distance from the wall d and for the object size s (which is the side of square object) is calculated via lattice Blotzmann simulation . By changing d and s separately, their effect on the added mass is studied systematically. The simulation results reveal that for the systems in which d > 4s, the distance does not influence the added mass any more. The added mass increases when the object approaches the wall and reaches its maximum value as it moves on the wall (d -- > 0). In this case, the added mass is about 73% larger than which of the case d=4s. In addition, it is observed that the added mass increases by increasing of the object size s and vice versa. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lattice%20Boltzmann%20simulation" title="Lattice Boltzmann simulation ">Lattice Boltzmann simulation </a>, <a href="https://publications.waset.org/abstracts/search?q=added%20mass" title=" added mass"> added mass</a>, <a href="https://publications.waset.org/abstracts/search?q=square" title=" square"> square</a>, <a href="https://publications.waset.org/abstracts/search?q=variable%20size" title=" variable size"> variable size</a> </p> <a href="https://publications.waset.org/abstracts/22399/calculation-of-the-added-mass-of-a-submerged-object-with-variable-sizes-at-different-distances-from-the-wall-via-lattice-boltzmann-simulations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22399.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">477</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4062</span> Adaptive Online Object Tracking via Positive and Negative Models Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaomei%20Li">Shaomei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yawen%20Wang"> Yawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Gao"> Chao Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To improve tracking drift which often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, tracking object frame by frame via particle filtering. Thirdly, validating the tracking reliability based on both positive and negative models matching. Finally, relocating the object based on SIFT features matching and voting when drift occurs. Object appearance model is updated at the same time. The algorithm cannot only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title="object tracking">object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking%20drift" title=" tracking drift"> tracking drift</a>, <a href="https://publications.waset.org/abstracts/search?q=partial%20least%20squares%20analysis" title=" partial least squares analysis"> partial least squares analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20and%20negative%20models%20matching" title=" positive and negative models matching"> positive and negative models matching</a> </p> <a href="https://publications.waset.org/abstracts/19382/adaptive-online-object-tracking-via-positive-and-negative-models-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">531</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4061</span> 6D Posture Estimation of Road Vehicles from Color Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yoshimoto%20Kurihara">Yoshimoto Kurihara</a>, <a href="https://publications.waset.org/abstracts/search?q=Tad%20Gonsalves"> Tad Gonsalves</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Currently, in the field of object posture estimation, there is research on estimating the position and angle of an object by storing a 3D model of the object to be estimated in advance in a computer and matching it with the model. However, in this research, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks – a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy of each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°, the accuracy of the classification was about 87.3%, and that of regression was about 98.9%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=6D%20posture%20estimation" title="6D posture estimation">6D posture estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=AlexNet" title=" AlexNet"> AlexNet</a> </p> <a href="https://publications.waset.org/abstracts/138449/6d-posture-estimation-of-road-vehicles-from-color-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">157</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4060</span> Detect Circles in Image: Using Statistical Image Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fathi%20M.%20O.%20Hamed">Fathi M. O. Hamed</a>, <a href="https://publications.waset.org/abstracts/search?q=Salma%20F.%20Elkofhaifee"> Salma F. Elkofhaifee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this work is to detect geometrical shape objects in an image. In this paper, the object is considered to be as a circle shape. The identification requires find three characteristics, which are number, size, and location of the object. To achieve the goal of this work, this paper presents an algorithm that combines from some of statistical approaches and image analysis techniques. This algorithm has been implemented to arrive at the major objectives in this paper. The algorithm has been evaluated by using simulated data, and yields good results, and then it has been applied to real data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=median%20filter" title=" median filter"> median filter</a>, <a href="https://publications.waset.org/abstracts/search?q=projection" title=" projection"> projection</a>, <a href="https://publications.waset.org/abstracts/search?q=scale-space" title=" scale-space"> scale-space</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=threshold" title=" threshold"> threshold</a> </p> <a href="https://publications.waset.org/abstracts/37141/detect-circles-in-image-using-statistical-image-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37141.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4059</span> Geospatial Techniques and VHR Imagery Use for Identification and Classification of Slums in Gujrat City, Pakistan</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Ameer%20Nawaz%20Akram">Muhammad Ameer Nawaz Akram</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The 21st century has been revealed that many individuals around the world are living in urban settlements than in rural zones. The evolution of numerous cities in emerging and newly developed countries is accompanied by the rise of slums. The precise definition of a slum varies countries to countries, but the universal harmony is that slums are dilapidated settlements facing severe poverty and have lacked access to sanitation, water, electricity, good living styles, and land tenure. The slum settlements always vary in unique patterns within and among the countries and cities. The core objective of this study is the spatial identification and classification of slums in Gujrat city Pakistan from very high-resolution GeoEye-1 (0.41m) satellite imagery. Slums were first identified using GPS for sample site identification and ground-truthing; through this process, 425 slums were identified. Then Object-Oriented Analysis (OOA) was applied to classify slums on digital image. Spatial analysis softwares, e.g., ArcGIS 10.3, Erdas Imagine 9.3, and Envi 5.1, were used for processing data and performing the analysis. Results show that OOA provides up to 90% accuracy for the identification of slums. Jalal Cheema and Allah Ho colonies are severely affected by slum settlements. The ratio of criminal activities is also higher here than in other areas. Slums are increasing with the passage of time in urban areas, and they will be like a hazardous problem in coming future. So now, the executive bodies need to make effective policies and move towards the amelioration process of the city. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=slums" title="slums">slums</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS" title=" GPS"> GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20oriented%20analysis" title=" object oriented analysis"> object oriented analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=zonal%20change%20detection" title=" zonal change detection"> zonal change detection</a> </p> <a href="https://publications.waset.org/abstracts/119513/geospatial-techniques-and-vhr-imagery-use-for-identification-and-classification-of-slums-in-gujrat-city-pakistan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/119513.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4058</span> A Fast Calculation Approach for Position Identification in a Distance Space</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dawei%20Cai">Dawei Cai</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuya%20Tokuda"> Yuya Tokuda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The market of localization based service (LBS) is expanding. The acquisition of physical location is the fundamental basis for LBS. GPS, the de facto standard for outdoor localization, does not work well in indoor environment due to the blocking of signals by walls and ceiling. To acquire high accurate localization in an indoor environment, many techniques have been developed. Triangulation approach is often used for identifying the location, but a heavy and complex computation is necessary to calculate the location of the distances between the object and several source points. This computation is also time and power consumption, and not favorable to a mobile device that needs a long action life with battery. To provide a low power consumption approach for a mobile device, this paper presents a fast calculation approach to identify the location of the object without online solving solutions to simultaneous quadratic equations. In our approach, we divide the location identification into two parts, one is offline, and other is online. In offline mode, we make a mapping process that maps the location area to distance space and find a simple formula that can be used to identify the location of the object online with very light computation. The characteristic of the approach is a good tradeoff between the accuracy and computational amount. Therefore, this approach can be used in smartphone and other mobile devices that need a long work time. To show the performance, some simulation experimental results are provided also in the paper. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=indoor%20localization" title="indoor localization">indoor localization</a>, <a href="https://publications.waset.org/abstracts/search?q=location%20based%20service" title=" location based service"> location based service</a>, <a href="https://publications.waset.org/abstracts/search?q=triangulation" title=" triangulation"> triangulation</a>, <a href="https://publications.waset.org/abstracts/search?q=fast%20calculation" title=" fast calculation"> fast calculation</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20device" title=" mobile device"> mobile device</a> </p> <a href="https://publications.waset.org/abstracts/86046/a-fast-calculation-approach-for-position-identification-in-a-distance-space" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86046.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">174</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4057</span> UAV Based Visual Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaibhav%20Dalmia">Vaibhav Dalmia</a>, <a href="https://publications.waset.org/abstracts/search?q=Manoj%20Phirke"> Manoj Phirke</a>, <a href="https://publications.waset.org/abstracts/search?q=Renith%20G"> Renith G</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the wide adoption of UAVs (unmanned aerial vehicles) in various industries by the government as well as private corporations for solving computer vision tasks it’s necessary that their potential is analyzed completely. Recent advances in Deep Learning have also left us with a plethora of algorithms to solve different computer vision tasks. This study provides a comprehensive survey on solving the Visual Object Tracking problem and explains the tradeoffs involved in building a real-time yet reasonably accurate object tracking system for UAVs by looking at existing methods and evaluating them on the aerial datasets. Finally, the best trackers suitable for UAV-based applications are provided. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=drones" title=" drones"> drones</a>, <a href="https://publications.waset.org/abstracts/search?q=single%20object%20tracking" title=" single object tracking"> single object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20object%20tracking" title=" visual object tracking"> visual object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=UAVs" title=" UAVs"> UAVs</a> </p> <a href="https://publications.waset.org/abstracts/145331/uav-based-visual-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4056</span> Object-Oriented Modeling Simulation and Control of Activated Sludge Process</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Fernandez%20de%20Canete">J. Fernandez de Canete</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Del%20Saz%20Orozco"> P. Del Saz Orozco</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20Garcia-Moral"> I. Garcia-Moral</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Akhrymenka"> A. Akhrymenka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object-oriented modeling is spreading in current simulation of wastewater treatments plants through the use of the individual components of the process and its relations to define the underlying dynamic equations. In this paper, we describe the use of the free-software OpenModelica simulation environment for the object-oriented modeling of an activated sludge process under feedback control. The performance of the controlled system was analyzed both under normal conditions and in the presence of disturbances. The object-oriented described approach represents a valuable tool in teaching provides a practical insight in wastewater process control field. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object-oriented%20programming" title="object-oriented programming">object-oriented programming</a>, <a href="https://publications.waset.org/abstracts/search?q=activated%20sludge%20process" title=" activated sludge process"> activated sludge process</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenModelica" title=" OpenModelica"> OpenModelica</a>, <a href="https://publications.waset.org/abstracts/search?q=feedback%20control" title=" feedback control"> feedback control</a> </p> <a href="https://publications.waset.org/abstracts/47240/object-oriented-modeling-simulation-and-control-of-activated-sludge-process" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47240.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">386</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4055</span> Mosaic Augmentation: Insights and Limitations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Olivia%20A.%20Kjorlien">Olivia A. Kjorlien</a>, <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Asghari"> Maryam Asghari</a>, <a href="https://publications.waset.org/abstracts/search?q=Farshid%20Alizadeh-Shabdiz"> Farshid Alizadeh-Shabdiz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of this paper is to investigate the impact of mosaic augmentation on the performance of object detection solutions. To carry out the study, YOLOv4 and YOLOv4-Tiny models have been selected, which are popular, advanced object detection models. These models are also representatives of two classes of complex and simple models. The study also has been carried out on two categories of objects, simple and complex. For this study, YOLOv4 and YOLOv4 Tiny are trained with and without mosaic augmentation for two sets of objects. While mosaic augmentation improves the performance of simple object detection, it deteriorates the performance of complex object detection, specifically having the largest negative impact on the false positive rate in a complex object detection case. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accuracy" title="accuracy">accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=false%20positives" title=" false positives"> false positives</a>, <a href="https://publications.waset.org/abstracts/search?q=mosaic%20augmentation" title=" mosaic augmentation"> mosaic augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV4" title=" YOLOV4"> YOLOV4</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV4-Tiny" title=" YOLOV4-Tiny"> YOLOV4-Tiny</a> </p> <a href="https://publications.waset.org/abstracts/162634/mosaic-augmentation-insights-and-limitations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162634.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4054</span> On the Study of the Electromagnetic Scattering by Large Obstacle Based on the Method of Auxiliary Sources</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hidouri%20Sami">Hidouri Sami</a>, <a href="https://publications.waset.org/abstracts/search?q=Aguili%20Taoufik"> Aguili Taoufik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We consider fast and accurate solutions of scattering problems by large perfectly conducting objects (PEC) formulated by an optimization of the Method of Auxiliary Sources (MAS). We present various techniques used to reduce the total computational cost of the scattering problem. The first technique is based on replacing the object by an array of finite number of small (PEC) object with the same shape. The second solution reduces the problem on considering only the half of the object.These two solutions are compared to results from the reference bibliography. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=method%20of%20auxiliary%20sources" title="method of auxiliary sources">method of auxiliary sources</a>, <a href="https://publications.waset.org/abstracts/search?q=scattering" title=" scattering"> scattering</a>, <a href="https://publications.waset.org/abstracts/search?q=large%20object" title=" large object"> large object</a>, <a href="https://publications.waset.org/abstracts/search?q=RCS" title=" RCS"> RCS</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20resources" title=" computational resources"> computational resources</a> </p> <a href="https://publications.waset.org/abstracts/38516/on-the-study-of-the-electromagnetic-scattering-by-large-obstacle-based-on-the-method-of-auxiliary-sources" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">244</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4053</span> Vehicular Speed Detection Camera System Using Video Stream</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20A.%20Anser%20Pasha">C. A. Anser Pasha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a new Vehicular Speed Detection Camera System that is applicable as an alternative to traditional radars with the same accuracy or even better is presented. The real-time measurement and analysis of various traffic parameters such as speed and number of vehicles are increasingly required in traffic control and management. Image processing techniques are now considered as an attractive and flexible method for automatic analysis and data collections in traffic engineering. Various algorithms based on image processing techniques have been applied to detect multiple vehicles and track them. The SDCS processes can be divided into three successive phases; the first phase is Objects detection phase, which uses a hybrid algorithm based on combining an adaptive background subtraction technique with a three-frame differencing algorithm which ratifies the major drawback of using only adaptive background subtraction. The second phase is Objects tracking, which consists of three successive operations - object segmentation, object labeling, and object center extraction. Objects tracking operation takes into consideration the different possible scenarios of the moving object like simple tracking, the object has left the scene, the object has entered the scene, object crossed by another object, and object leaves and another one enters the scene. The third phase is speed calculation phase, which is calculated from the number of frames consumed by the object to pass by the scene. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radar" title="radar">radar</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/45316/vehicular-speed-detection-camera-system-using-video-stream" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45316.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">468</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4052</span> An Advanced YOLOv8 for Vehicle Detection in Intelligent Traffic Management</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Degale%20Desta">A. Degale Desta</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Vehicle detection accuracy is critical to intelligent transportation systems and autonomous driving. The state-of-the-art object identification technology YOLOv8 has shown significant gains in efficiency and detection accuracy. This study uses the BDD100K dataset, which is renowned for its extensive and varied annotations, to assess how well YOLOv8 performs in vehicle detection. Objectives: The primary objective of this research is to assess YOLOv8's performance in intelligent transportation system vehicle identification and its ability to accurately identify cars in urban environments for safety prioritization. Methods: The primary objective of this research is to assess YOLOv8's performance in intelligent transportation system vehicle identification and its ability to accurately identify cars in urban environments for safety prioritization. Results: The results show that YOLOv8 achieves high mAP, recall, precision, and F1-score values, indicating state-of-the-art performance. This suggests that YOLOv8 can identify cars in complex urban environments with a high degree of accuracy and reliable results in a variety of traffic scenarios. Conclusion: The results indicate that YOLOv8 is a useful tool for enhancing vehicle detection accuracy in intelligent transportation systems, hence advancing urban public safety and security. The model's demonstrated performance shows how well it may be incorporated into autonomous driving applications to improve situational awareness and responsiveness. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vehicle%20detection" title="vehicle detection">vehicle detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv8" title=" YOLOv8"> YOLOv8</a>, <a href="https://publications.waset.org/abstracts/search?q=BDD100K" title=" BDD100K"> BDD100K</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/195890/an-advanced-yolov8-for-vehicle-detection-in-intelligent-traffic-management" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/195890.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">8</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4051</span> Global Based Histogram for 3D Object Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Somar%20Boubou">Somar Boubou</a>, <a href="https://publications.waset.org/abstracts/search?q=Tatsuo%20Narikiyo"> Tatsuo Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Michihiro%20Kawanishi"> Michihiro Kawanishi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we address the problem of 3D object recognition with depth sensors such as Kinect or Structure sensor. Compared with traditional approaches based on local descriptors, which depends on local information around the object key points, we propose a global features based descriptor. Proposed descriptor, which we name as Differential Histogram of Normal Vectors (DHONV), is designed particularly to capture the surface geometric characteristics of the 3D objects represented by depth images. We describe the 3D surface of an object in each frame using a 2D spatial histogram capturing the normalized distribution of differential angles of the surface normal vectors. The object recognition experiments on the benchmark RGB-D object dataset and a self-collected dataset show that our proposed descriptor outperforms two others descriptors based on spin-images and histogram of normal vectors with linear-SVM classifier. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vision%20in%20control" title="vision in control">vision in control</a>, <a href="https://publications.waset.org/abstracts/search?q=robotics" title=" robotics"> robotics</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20histogram%20of%20normal%20vectors" title=" differential histogram of normal vectors"> differential histogram of normal vectors</a> </p> <a href="https://publications.waset.org/abstracts/47486/global-based-histogram-for-3d-object-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47486.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">279</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4050</span> Deep Learning Application for Object Image Recognition and Robot Automatic Grasping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiuh-Jer%20Huang">Shiuh-Jer Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen-Zon%20Yan"> Chen-Zon Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20K.%20Huang"> C. K. Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chun-Chien%20Ting"> Chun-Chien Ting</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the vision system application in industrial environment for autonomous purposes is required intensely, the image recognition technique becomes an important research topic. Here, deep learning algorithm is employed in image system to recognize the industrial object and integrate with a 7A6 Series Manipulator for object automatic gripping task. PC and Graphic Processing Unit (GPU) are chosen to construct the 3D Vision Recognition System. Depth Camera (Intel RealSense SR300) is employed to extract the image for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in Convolution neural network (CNN) structure for object classification and center point prediction. Additionally, image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to robotic controller. Finally, a six-axis manipulator can grasp the specific object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. It is useful for future intelligent robotic application in industrial 4.0 environment. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv2" title=" YOLOv2"> YOLOv2</a>, <a href="https://publications.waset.org/abstracts/search?q=7A6%20series%20manipulator" title=" 7A6 series manipulator"> 7A6 series manipulator</a> </p> <a href="https://publications.waset.org/abstracts/110468/deep-learning-application-for-object-image-recognition-and-robot-automatic-grasping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110468.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=135">135</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=136">136</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Object%20Identification&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>