Search results for: object
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="object"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1214</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: object</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1214</span> Canonical Objects and Other Objects in Arabic</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Safiah%20Ahmed%20Madkhali">Safiah Ahmed Madkhali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The grammatical relation object has not attracted the same attention in the literature as subject has. Where there is a clearly monotransitive verb such as kick, the criteria for identifying the grammatical relation may converge. However, the term object is also used to refer to phenomena that do not subsume all, or even most, of the recognized properties of the canonical object. Instances of such phenomena include non-canonical objects such as the ones in the so-called double-object construction i.e. the indirect object and the direct object as in (He bought his dog a new collar). In this paper, it is demonstrated how criteria of identifying the grammatical relation object that are found in the theoretical and typological literature can be applied to Arabic. Also, further language-specific criteria are here derived from the regularities of the canonical object in the language. The criteria established in this way are then applied to the non-canonical objects to demonstrate how far they conform to, or diverge from, the canonical object. Contrary to the claim that the direct object is more similar to the canonical object than is the indirect object, it was found that it is, in fact, the indirect object rather than the direct object that shares most of the aspects of the canonical object in monotransitive clauses. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=canonical%20objects" title="canonical objects">canonical objects</a>, <a href="https://publications.waset.org/abstracts/search?q=double-object%20constructions" title=" double-object constructions"> double-object constructions</a>, <a href="https://publications.waset.org/abstracts/search?q=cognate%20object%20constructions" title=" cognate object constructions"> cognate object constructions</a>, <a href="https://publications.waset.org/abstracts/search?q=non-canonical%20objects" title=" non-canonical objects"> non-canonical objects</a> </p> <a href="https://publications.waset.org/abstracts/141579/canonical-objects-and-other-objects-in-arabic" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141579.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1213</span> When Pain Becomes Love For God: The Non-Object Self</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Roni%20Naor-Hofri">Roni Naor-Hofri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper shows how self-inflicted pain enabled the expression of love for God among Christian monastic ascetics in medieval central Europe. As scholars have shown, being in a state of pain leads to a change in or destruction of language, an essential feature of the self. The author argues that this transformation allows the self to transcend its boundaries as an object, even if only temporarily and in part. The epistemic achievement of love for God, a non-object, would not otherwise have been possible. To substantiate her argument, the author shows that the self’s transformation into a non-object enables the imitation of God: not solely in the sense of imitatio Christi, of physical and visual representations of God incarnate in the flesh of His son Christ, but also in the sense of the self’s experience of being a non-object, just like God, the target of the self’s love. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=love%20for%20God" title="love for God ">love for God </a>, <a href="https://publications.waset.org/abstracts/search?q=pain" title=" pain"> pain</a>, <a href="https://publications.waset.org/abstracts/search?q=philosophy" title=" philosophy"> philosophy</a>, <a href="https://publications.waset.org/abstracts/search?q=religion" title=" religion"> religion</a> </p> <a href="https://publications.waset.org/abstracts/135417/when-pain-becomes-love-for-god-the-non-object-self" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135417.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">243</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1212</span> Pose Normalization Network for Object Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bingquan%20Shen">Bingquan Shen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Convolutional Neural Networks (CNN) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. Given the problem where one have limited viewpoints of a particular object for classification, we present a pose normalization architecture to transform the object to existing viewpoints in the training dataset before classification to yield better classification performance. We have demonstrated that this Pose Normalization Network (PNN) can capture the style of the target object and is able to re-render it to a desired viewpoint. Moreover, we have shown that the PNN improves the classification result for the 3D chairs dataset and ShapeNet airplanes dataset when given only images at limited viewpoint, as compared to a CNN baseline. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20classification" title=" object classification"> object classification</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20normalization" title=" pose normalization"> pose normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=viewpoint%20invariant" title=" viewpoint invariant"> viewpoint invariant</a> </p> <a href="https://publications.waset.org/abstracts/56852/pose-normalization-network-for-object-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56852.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1211</span> Multichannel Object Detection with Event Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rafael%20Iliasov">Rafael Iliasov</a>, <a href="https://publications.waset.org/abstracts/search?q=Alessandro%20Golkar"> Alessandro Golkar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection based on event vision has been a dynamically growing field in computer vision for the last 16 years. In this work, we create multiple channels from a single event camera and propose an event fusion method (EFM) to enhance object detection in event-based vision systems. Each channel uses a different accumulation buffer to collect events from the event camera. We implement YOLOv7 for object detection, followed by a fusion algorithm. Our multichannel approach outperforms single-channel-based object detection by 0.7% in mean Average Precision (mAP) for detection overlapping ground truth with IOU = 0.5. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=event%20camera" title="event camera">event camera</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20with%20multimodal%20inputs" title=" object detection with multimodal inputs"> object detection with multimodal inputs</a>, <a href="https://publications.waset.org/abstracts/search?q=multichannel%20fusion" title=" multichannel fusion"> multichannel fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/190247/multichannel-object-detection-with-event-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190247.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">27</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1210</span> The Study on How Social Cues in a Scene Modulate Basic Object Recognition Proces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shih-Yu%20Lo">Shih-Yu Lo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Stereotypes exist in almost every society, affecting how people interact with each other. However, to our knowledge, the influence of stereotypes was rarely explored in the context of basic perceptual processes. This study aims to explore how the gender stereotype affects object recognition. Participants were presented with a series of scene pictures, followed by a target display with a man or a woman, holding a weapon or a non-weapon object. The task was to identify whether the object in the target display was a weapon or not. Although the gender of the object holder could not predict whether he or she held a weapon, and was irrelevant to the task goal, the participant nevertheless tended to identify the object as a weapon when the object holder was a man than a woman. The analysis based on the signal detection theory showed that the stereotype effect on object recognition mainly resulted from the participant’s bias to make a 'weapon' response when a man was in the scene instead of a woman in the scene. In addition, there was a trend that the participant’s sensitivity to differentiate a weapon from a non-threating object was higher when a woman was in the scene than a man was in the scene. The results of this study suggest that the irrelevant social cues implied in the visual scene can be very powerful that they can modulate the basic object recognition process. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gender%20stereotype" title="gender stereotype">gender stereotype</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20detection%20theory" title=" signal detection theory"> signal detection theory</a>, <a href="https://publications.waset.org/abstracts/search?q=weapon" title=" weapon"> weapon</a> </p> <a href="https://publications.waset.org/abstracts/92535/the-study-on-how-social-cues-in-a-scene-modulate-basic-object-recognition-proces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92535.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">209</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1209</span> Specified Human Motion Recognition and Unknown Hand-Held Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jinsiang%20Shaw">Jinsiang Shaw</a>, <a href="https://publications.waset.org/abstracts/search?q=Pik-Hoe%20Chen"> Pik-Hoe Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to integrate human recognition, motion recognition, and object tracking technologies without requiring a pre-training database model for motion recognition or the unknown object itself. Furthermore, it can simultaneously track multiple users and multiple objects. Unlike other existing human motion recognition methods, our approach employs a rule-based condition method to determine if a user hand is approaching or departing an object. It uses a background subtraction method to separate the human and object from the background, and employs behavior features to effectively interpret human object-grabbing actions. With an object’s histogram characteristics, we are able to isolate and track it using back projection. Hence, a moving object trajectory can be recorded and the object itself can be located. This particular technique can be used in a camera surveillance system in a shopping area to perform real-time intelligent surveillance, thus preventing theft. Experimental results verify the validity of the developed surveillance algorithm with an accuracy of 83% for shoplifting detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Automatic%20Tracking" title="Automatic Tracking">Automatic Tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Back%20Projection" title=" Back Projection"> Back Projection</a>, <a href="https://publications.waset.org/abstracts/search?q=Motion%20Recognition" title=" Motion Recognition"> Motion Recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Shoplifting" title=" Shoplifting"> Shoplifting</a> </p> <a href="https://publications.waset.org/abstracts/66866/specified-human-motion-recognition-and-unknown-hand-held-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1208</span> Facility Detection from Image Using Mathematical Morphology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=In-Geun%20Lim">In-Geun Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung-Woong%20Ra"> Sung-Woong Ra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As high resolution satellite images can be used, lots of studies are carried out for exploiting these images in various fields. This paper proposes the method based on mathematical morphology for extracting the ‘horse's hoof shaped object’. This proposed method can make an automatic object detection system to track the meaningful object in a large satellite image rapidly. Mathematical morphology process can apply in binary image, so this method is very simple. Therefore this method can easily extract the ‘horse's hoof shaped object’ from any images which have indistinct edges of the tracking object and have different image qualities depending on filming location, filming time, and filming environment. Using the proposed method by which ‘horse's hoof shaped object’ can be rapidly extracted, the performance of the automatic object detection system can be improved dramatically. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facility%20detection" title="facility detection">facility detection</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20image" title=" satellite image"> satellite image</a>, <a href="https://publications.waset.org/abstracts/search?q=object" title=" object"> object</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20morphology" title=" mathematical morphology"> mathematical morphology</a> </p> <a href="https://publications.waset.org/abstracts/67611/facility-detection-from-image-using-mathematical-morphology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67611.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">382</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1207</span> Calculation of the Added Mass of a Submerged Object with Variable Sizes at Different Distances from the Wall via Lattice Boltzmann Simulations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nastaran%20Ahmadpour%20Samani">Nastaran Ahmadpour Samani</a>, <a href="https://publications.waset.org/abstracts/search?q=Shahram%20Talebi"> Shahram Talebi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Added mass is an important quantity in analysis of the motion of a submerged object ,which can be calculated by solving the equation of potential flow around the object . Here, we consider systems in which a square object is submerged in a channel of fluid and moves parallel to the wall. The corresponding added mass at a given distance from the wall d and for the object size s (which is the side of square object) is calculated via lattice Blotzmann simulation . By changing d and s separately, their effect on the added mass is studied systematically. The simulation results reveal that for the systems in which d > 4s, the distance does not influence the added mass any more. The added mass increases when the object approaches the wall and reaches its maximum value as it moves on the wall (d -- > 0). In this case, the added mass is about 73% larger than which of the case d=4s. In addition, it is observed that the added mass increases by increasing of the object size s and vice versa. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lattice%20Boltzmann%20simulation" title="Lattice Boltzmann simulation ">Lattice Boltzmann simulation </a>, <a href="https://publications.waset.org/abstracts/search?q=added%20mass" title=" added mass"> added mass</a>, <a href="https://publications.waset.org/abstracts/search?q=square" title=" square"> square</a>, <a href="https://publications.waset.org/abstracts/search?q=variable%20size" title=" variable size"> variable size</a> </p> <a href="https://publications.waset.org/abstracts/22399/calculation-of-the-added-mass-of-a-submerged-object-with-variable-sizes-at-different-distances-from-the-wall-via-lattice-boltzmann-simulations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22399.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">476</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1206</span> Adaptive Online Object Tracking via Positive and Negative Models Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaomei%20Li">Shaomei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yawen%20Wang"> Yawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Gao"> Chao Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To improve tracking drift which often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, tracking object frame by frame via particle filtering. Thirdly, validating the tracking reliability based on both positive and negative models matching. Finally, relocating the object based on SIFT features matching and voting when drift occurs. Object appearance model is updated at the same time. The algorithm cannot only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title="object tracking">object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking%20drift" title=" tracking drift"> tracking drift</a>, <a href="https://publications.waset.org/abstracts/search?q=partial%20least%20squares%20analysis" title=" partial least squares analysis"> partial least squares analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20and%20negative%20models%20matching" title=" positive and negative models matching"> positive and negative models matching</a> </p> <a href="https://publications.waset.org/abstracts/19382/adaptive-online-object-tracking-via-positive-and-negative-models-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1205</span> 6D Posture Estimation of Road Vehicles from Color Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yoshimoto%20Kurihara">Yoshimoto Kurihara</a>, <a href="https://publications.waset.org/abstracts/search?q=Tad%20Gonsalves"> Tad Gonsalves</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Currently, in the field of object posture estimation, there is research on estimating the position and angle of an object by storing a 3D model of the object to be estimated in advance in a computer and matching it with the model. However, in this research, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks – a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy of each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°, the accuracy of the classification was about 87.3%, and that of regression was about 98.9%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=6D%20posture%20estimation" title="6D posture estimation">6D posture estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=AlexNet" title=" AlexNet"> AlexNet</a> </p> <a href="https://publications.waset.org/abstracts/138449/6d-posture-estimation-of-road-vehicles-from-color-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1204</span> Object-Oriented Program Comprehension by Identification of Software Components and Their Connexions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdelhak-Djamel%20Seriai">Abdelhak-Djamel Seriai</a>, <a href="https://publications.waset.org/abstracts/search?q=Selim%20Kebir"> Selim Kebir</a>, <a href="https://publications.waset.org/abstracts/search?q=Allaoua%20Chaoui"> Allaoua Chaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the last decades, object oriented program- ming has been massively used to build large-scale systems. However, evolution and maintenance of such systems become a laborious task because of the lack of object oriented programming to offer a precise view of the functional building blocks of the system. This lack is caused by the fine granularity of classes and objects. In this paper, we use a post object-oriented technology namely software components, to propose an approach based on the identification of the functional building blocks of an object oriented system by analyzing its source code. These functional blocks are specified as software components and the result is a multi-layer component based software architecture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=software%20comprehension" title="software comprehension">software comprehension</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20component" title=" software component"> software component</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20oriented" title=" object oriented"> object oriented</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20architecture" title=" software architecture"> software architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=reverse%20engineering" title=" reverse engineering"> reverse engineering</a> </p> <a href="https://publications.waset.org/abstracts/32119/object-oriented-program-comprehension-by-identification-of-software-components-and-their-connexions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32119.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1203</span> UAV Based Visual Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaibhav%20Dalmia">Vaibhav Dalmia</a>, <a href="https://publications.waset.org/abstracts/search?q=Manoj%20Phirke"> Manoj Phirke</a>, <a href="https://publications.waset.org/abstracts/search?q=Renith%20G"> Renith G</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the wide adoption of UAVs (unmanned aerial vehicles) in various industries by the government as well as private corporations for solving computer vision tasks it’s necessary that their potential is analyzed completely. Recent advances in Deep Learning have also left us with a plethora of algorithms to solve different computer vision tasks. This study provides a comprehensive survey on solving the Visual Object Tracking problem and explains the tradeoffs involved in building a real-time yet reasonably accurate object tracking system for UAVs by looking at existing methods and evaluating them on the aerial datasets. Finally, the best trackers suitable for UAV-based applications are provided. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=drones" title=" drones"> drones</a>, <a href="https://publications.waset.org/abstracts/search?q=single%20object%20tracking" title=" single object tracking"> single object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20object%20tracking" title=" visual object tracking"> visual object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=UAVs" title=" UAVs"> UAVs</a> </p> <a href="https://publications.waset.org/abstracts/145331/uav-based-visual-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1202</span> Object-Oriented Modeling Simulation and Control of Activated Sludge Process</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Fernandez%20de%20Canete">J. Fernandez de Canete</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Del%20Saz%20Orozco"> P. Del Saz Orozco</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20Garcia-Moral"> I. Garcia-Moral</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Akhrymenka"> A. Akhrymenka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object-oriented modeling is spreading in current simulation of wastewater treatments plants through the use of the individual components of the process and its relations to define the underlying dynamic equations. In this paper, we describe the use of the free-software OpenModelica simulation environment for the object-oriented modeling of an activated sludge process under feedback control. The performance of the controlled system was analyzed both under normal conditions and in the presence of disturbances. The object-oriented described approach represents a valuable tool in teaching provides a practical insight in wastewater process control field. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object-oriented%20programming" title="object-oriented programming">object-oriented programming</a>, <a href="https://publications.waset.org/abstracts/search?q=activated%20sludge%20process" title=" activated sludge process"> activated sludge process</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenModelica" title=" OpenModelica"> OpenModelica</a>, <a href="https://publications.waset.org/abstracts/search?q=feedback%20control" title=" feedback control"> feedback control</a> </p> <a href="https://publications.waset.org/abstracts/47240/object-oriented-modeling-simulation-and-control-of-activated-sludge-process" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47240.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">386</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1201</span> Mosaic Augmentation: Insights and Limitations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Olivia%20A.%20Kjorlien">Olivia A. Kjorlien</a>, <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Asghari"> Maryam Asghari</a>, <a href="https://publications.waset.org/abstracts/search?q=Farshid%20Alizadeh-Shabdiz"> Farshid Alizadeh-Shabdiz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of this paper is to investigate the impact of mosaic augmentation on the performance of object detection solutions. To carry out the study, YOLOv4 and YOLOv4-Tiny models have been selected, which are popular, advanced object detection models. These models are also representatives of two classes of complex and simple models. The study also has been carried out on two categories of objects, simple and complex. For this study, YOLOv4 and YOLOv4 Tiny are trained with and without mosaic augmentation for two sets of objects. While mosaic augmentation improves the performance of simple object detection, it deteriorates the performance of complex object detection, specifically having the largest negative impact on the false positive rate in a complex object detection case. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accuracy" title="accuracy">accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=false%20positives" title=" false positives"> false positives</a>, <a href="https://publications.waset.org/abstracts/search?q=mosaic%20augmentation" title=" mosaic augmentation"> mosaic augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV4" title=" YOLOV4"> YOLOV4</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV4-Tiny" title=" YOLOV4-Tiny"> YOLOV4-Tiny</a> </p> <a href="https://publications.waset.org/abstracts/162634/mosaic-augmentation-insights-and-limitations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162634.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1200</span> On the Study of the Electromagnetic Scattering by Large Obstacle Based on the Method of Auxiliary Sources</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hidouri%20Sami">Hidouri Sami</a>, <a href="https://publications.waset.org/abstracts/search?q=Aguili%20Taoufik"> Aguili Taoufik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We consider fast and accurate solutions of scattering problems by large perfectly conducting objects (PEC) formulated by an optimization of the Method of Auxiliary Sources (MAS). We present various techniques used to reduce the total computational cost of the scattering problem. The first technique is based on replacing the object by an array of finite number of small (PEC) object with the same shape. The second solution reduces the problem on considering only the half of the object.These two solutions are compared to results from the reference bibliography. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=method%20of%20auxiliary%20sources" title="method of auxiliary sources">method of auxiliary sources</a>, <a href="https://publications.waset.org/abstracts/search?q=scattering" title=" scattering"> scattering</a>, <a href="https://publications.waset.org/abstracts/search?q=large%20object" title=" large object"> large object</a>, <a href="https://publications.waset.org/abstracts/search?q=RCS" title=" RCS"> RCS</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20resources" title=" computational resources"> computational resources</a> </p> <a href="https://publications.waset.org/abstracts/38516/on-the-study-of-the-electromagnetic-scattering-by-large-obstacle-based-on-the-method-of-auxiliary-sources" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">241</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1199</span> Vehicular Speed Detection Camera System Using Video Stream</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20A.%20Anser%20Pasha">C. A. Anser Pasha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a new Vehicular Speed Detection Camera System that is applicable as an alternative to traditional radars with the same accuracy or even better is presented. The real-time measurement and analysis of various traffic parameters such as speed and number of vehicles are increasingly required in traffic control and management. Image processing techniques are now considered as an attractive and flexible method for automatic analysis and data collections in traffic engineering. Various algorithms based on image processing techniques have been applied to detect multiple vehicles and track them. The SDCS processes can be divided into three successive phases; the first phase is Objects detection phase, which uses a hybrid algorithm based on combining an adaptive background subtraction technique with a three-frame differencing algorithm which ratifies the major drawback of using only adaptive background subtraction. The second phase is Objects tracking, which consists of three successive operations - object segmentation, object labeling, and object center extraction. Objects tracking operation takes into consideration the different possible scenarios of the moving object like simple tracking, the object has left the scene, the object has entered the scene, object crossed by another object, and object leaves and another one enters the scene. The third phase is speed calculation phase, which is calculated from the number of frames consumed by the object to pass by the scene. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radar" title="radar">radar</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/45316/vehicular-speed-detection-camera-system-using-video-stream" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45316.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1198</span> Global Based Histogram for 3D Object Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Somar%20Boubou">Somar Boubou</a>, <a href="https://publications.waset.org/abstracts/search?q=Tatsuo%20Narikiyo"> Tatsuo Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Michihiro%20Kawanishi"> Michihiro Kawanishi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we address the problem of 3D object recognition with depth sensors such as Kinect or Structure sensor. Compared with traditional approaches based on local descriptors, which depends on local information around the object key points, we propose a global features based descriptor. Proposed descriptor, which we name as Differential Histogram of Normal Vectors (DHONV), is designed particularly to capture the surface geometric characteristics of the 3D objects represented by depth images. We describe the 3D surface of an object in each frame using a 2D spatial histogram capturing the normalized distribution of differential angles of the surface normal vectors. The object recognition experiments on the benchmark RGB-D object dataset and a self-collected dataset show that our proposed descriptor outperforms two others descriptors based on spin-images and histogram of normal vectors with linear-SVM classifier. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vision%20in%20control" title="vision in control">vision in control</a>, <a href="https://publications.waset.org/abstracts/search?q=robotics" title=" robotics"> robotics</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20histogram%20of%20normal%20vectors" title=" differential histogram of normal vectors"> differential histogram of normal vectors</a> </p> <a href="https://publications.waset.org/abstracts/47486/global-based-histogram-for-3d-object-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47486.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">279</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1197</span> Deep Learning Application for Object Image Recognition and Robot Automatic Grasping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiuh-Jer%20Huang">Shiuh-Jer Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen-Zon%20Yan"> Chen-Zon Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20K.%20Huang"> C. K. Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chun-Chien%20Ting"> Chun-Chien Ting</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the vision system application in industrial environment for autonomous purposes is required intensely, the image recognition technique becomes an important research topic. Here, deep learning algorithm is employed in image system to recognize the industrial object and integrate with a 7A6 Series Manipulator for object automatic gripping task. PC and Graphic Processing Unit (GPU) are chosen to construct the 3D Vision Recognition System. Depth Camera (Intel RealSense SR300) is employed to extract the image for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in Convolution neural network (CNN) structure for object classification and center point prediction. Additionally, image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to robotic controller. Finally, a six-axis manipulator can grasp the specific object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. It is useful for future intelligent robotic application in industrial 4.0 environment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv2" title=" YOLOv2"> YOLOv2</a>, <a href="https://publications.waset.org/abstracts/search?q=7A6%20series%20manipulator" title=" 7A6 series manipulator"> 7A6 series manipulator</a> </p> <a href="https://publications.waset.org/abstracts/110468/deep-learning-application-for-object-image-recognition-and-robot-automatic-grasping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110468.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1196</span> An Approach from Fichte as a Response to the Kantian Dualism of Subject and Object: The Unity of the Subject and Object in Both Theoretical and Ethical Possibility</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mengjie%20Liu">Mengjie Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This essay aims at responding to the Kant arguments on how to fit the self-caused subject into the deterministic object which follows the natural laws. This essay mainly adopts the approach abstracted from Fichte’s “Wissenshaftslehre” (Doctrine of Science) to picture a possible solution to the conciliation of Kantian dualism. The Fichte approach is based on the unity of the theoretical and practical reason, which can be understood as a philosophical abstraction from ordinary experience combining both subject and object. This essay will discuss the general Kantian dualism problem and Fichte’s unity approach in the first part. Then the essay will elaborate on the achievement of this unity of the subject and object through Fichte’s “the I posits itself” process in the second section. The following third section is related to the ethical unity of subject and object based on the Fichte approach. The essay will also discuss the limitation of Fichte’s approach from two perspectives: (1) the theoretical possibility of the existence of the pure I and (2) Schelling’s statement that the Absolute I is a result rather than the originating act. This essay demonstrates a possible approach to unifying the subject and object supported by Fichte’s “Absolute I” and ethical theories and also points out the limitations of Fichte’s theories. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fichte" title="Fichte">Fichte</a>, <a href="https://publications.waset.org/abstracts/search?q=identity" title=" identity"> identity</a>, <a href="https://publications.waset.org/abstracts/search?q=Kantian%20dualism" title=" Kantian dualism"> Kantian dualism</a>, <a href="https://publications.waset.org/abstracts/search?q=Wissenshaftslehre" title=" Wissenshaftslehre"> Wissenshaftslehre</a> </p> <a href="https://publications.waset.org/abstracts/150645/an-approach-from-fichte-as-a-response-to-the-kantian-dualism-of-subject-and-object-the-unity-of-the-subject-and-object-in-both-theoretical-and-ethical-possibility" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150645.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1195</span> Active Space Debris Removal by Extreme Ultraviolet Radiation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Anandha%20Selvan">A. Anandha Selvan</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Malarvizhi"> B. Malarvizhi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent year the problem of space debris have become very serious. The mass of the artificial objects in orbit increased quite steadily at the rate of about 145 metric tons annually, leading to a total tally of approximately 7000 metric tons. Now most of space debris object orbiting in LEO region about 97%. The catastrophic collision can be mostly occurred in LEO region, where this collision generate the new debris. Thus, we propose a concept for cleaning the space debris in the region of thermosphere by passing the Extreme Ultraviolet (EUV) radiation to in front of space debris object from the re-orbiter. So in our concept the Extreme Ultraviolet (EUV) radiation will create the thermosphere expansion by reacting with atmospheric gas particles. So the drag is produced in front of the space debris object by thermosphere expansion. This drag force is high enough to slow down the space debris object’s relative velocity. Therefore the space debris object gradually reducing the altitude and finally enter into the earth’s atmosphere. After the first target is removed, the re-orbiter can be goes into next target. This method remove the space debris object without catching debris object. Thus it can be applied to a wide range of debris object without regard to their shapes or rotation. This paper discusses the operation of re-orbiter for removing the space debris in thermosphere region. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=active%20space%20debris%20removal" title="active space debris removal">active space debris removal</a>, <a href="https://publications.waset.org/abstracts/search?q=space%20debris" title=" space debris"> space debris</a>, <a href="https://publications.waset.org/abstracts/search?q=LEO" title=" LEO"> LEO</a>, <a href="https://publications.waset.org/abstracts/search?q=extreme%20ultraviolet" title=" extreme ultraviolet"> extreme ultraviolet</a>, <a href="https://publications.waset.org/abstracts/search?q=re-orbiter" title=" re-orbiter"> re-orbiter</a>, <a href="https://publications.waset.org/abstracts/search?q=thermosphere" title=" thermosphere"> thermosphere</a> </p> <a href="https://publications.waset.org/abstracts/20478/active-space-debris-removal-by-extreme-ultraviolet-radiation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20478.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1194</span> Genetic Algorithm Based Deep Learning Parameters Tuning for Robot Object Recognition and Grasping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Delowar%20Hossain">Delowar Hossain</a>, <a href="https://publications.waset.org/abstracts/search?q=Genci%20Capi"> Genci Capi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper concerns with the problem of deep learning parameters tuning using a genetic algorithm (GA) in order to improve the performance of deep learning (DL) method. We present a GA based DL method for robot object recognition and grasping. GA is used to optimize the DL parameters in learning procedure in term of the fitness function that is good enough. After finishing the evolution process, we receive the optimal number of DL parameters. To evaluate the performance of our method, we consider the object recognition and robot grasping tasks. Experimental results show that our method is efficient for robot object recognition and grasping. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=robot%20grasping" title=" robot grasping"> robot grasping</a> </p> <a href="https://publications.waset.org/abstracts/67943/genetic-algorithm-based-deep-learning-parameters-tuning-for-robot-object-recognition-and-grasping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">353</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1193</span> Urban Land Cover from GF-2 Satellite Images Using Object Based and Neural Network Classifications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lamyaa%20Gamal%20El-Deen%20Taha">Lamyaa Gamal El-Deen Taha</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashraf%20Sharawi"> Ashraf Sharawi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> China launched satellite GF-2 in 2014. This study deals with comparing nearest neighbor object-based classification and neural network classification methods for classification of the fused GF-2 image. Firstly, rectification of GF-2 image was performed. Secondly, a comparison between nearest neighbor object-based classification and neural network classification for classification of fused GF-2 was performed. Thirdly, the overall accuracy of classification and kappa index were calculated. Results indicate that nearest neighbor object-based classification is better than neural network classification for urban mapping. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GF-2%20images" title="GF-2 images">GF-2 images</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction-rectification" title=" feature extraction-rectification"> feature extraction-rectification</a>, <a href="https://publications.waset.org/abstracts/search?q=nearest%20neighbour%20object%20based%20classification" title=" nearest neighbour object based classification"> nearest neighbour object based classification</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation%20algorithms" title=" segmentation algorithms"> segmentation algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network%20classification" title=" neural network classification"> neural network classification</a>, <a href="https://publications.waset.org/abstracts/search?q=multilayer%20perceptron" title=" multilayer perceptron"> multilayer perceptron</a> </p> <a href="https://publications.waset.org/abstracts/84243/urban-land-cover-from-gf-2-satellite-images-using-object-based-and-neural-network-classifications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84243.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">389</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1192</span> Software Defined Storage: Object Storage over Hadoop Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amritesh%20Srivastava">Amritesh Srivastava</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaurav%20Sharma"> Gaurav Sharma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this project is to develop an open source object storage system that is highly durable, scalable and reliable. There are two representative systems in cloud computing: Google and Amazon. Their storage systems for Google GFS and Amazon S3 provide high reliability, performance and stability. Our proposed system is highly inspired from Amazon S3. We are using Hadoop Distributed File System (HDFS) Java API to implement our system. We propose the architecture of object storage system based on Hadoop. We discuss the requirements of our system, what we expect from our system and what problems we may encounter. We also give detailed design proposal along with the abstract source code to implement it. The final goal of the system is to provide REST based access to our object storage system that exists on top of HDFS. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hadoop" title="Hadoop">Hadoop</a>, <a href="https://publications.waset.org/abstracts/search?q=HBase" title=" HBase"> HBase</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20storage" title=" object storage"> object storage</a>, <a href="https://publications.waset.org/abstracts/search?q=REST" title=" REST"> REST</a> </p> <a href="https://publications.waset.org/abstracts/54130/software-defined-storage-object-storage-over-hadoop-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54130.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">339</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1191</span> Object-Oriented Programming for Modeling and Simulation of Systems in Physiology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Fernandez%20de%20Canete">J. Fernandez de Canete</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object-oriented modeling is spreading in the current simulation of physiological systems through the use of the individual components of the model and its interconnections to define the underlying dynamic equations. In this paper, we describe the use of both the SIMSCAPE and MODELICA simulation environments in the object-oriented modeling of the closed-loop cardiovascular system. The performance of the controlled system was analyzed by simulation in light of the existing hypothesis and validation tests previously performed with physiological data. The described approach represents a valuable tool in the teaching of physiology for graduate medical students. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object-oriented%20modeling" title="object-oriented modeling">object-oriented modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=SIMSCAPE%20simulation%20language" title=" SIMSCAPE simulation language"> SIMSCAPE simulation language</a>, <a href="https://publications.waset.org/abstracts/search?q=MODELICA%20simulation%20language" title=" MODELICA simulation language"> MODELICA simulation language</a>, <a href="https://publications.waset.org/abstracts/search?q=cardiovascular%20system" title=" cardiovascular system"> cardiovascular system</a> </p> <a href="https://publications.waset.org/abstracts/28645/object-oriented-programming-for-modeling-and-simulation-of-systems-in-physiology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28645.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1190</span> Theoretical Approaches to Graphic and Formal Generation from Evolutionary Genetics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Luz%20Estrada">Luz Estrada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The currents of evolutionary materialistic thought have argued that knowledge about an object is not obtained through the abstractive method. That is, the object cannot come to be understood if founded upon itself, nor does it take place by the encounter between form and matter. According to this affirmation, the research presented here identified as a problematic situation the absence of comprehension of the formal creation as a generative operation. This has been referred to as a recurrent lack in the production of objects and corresponds to the need to conceive the configurative process from the reality of its genesis. In this case, it is of interest to explore ways of creation that consider the object as if it were a living organism, as well as responding to the object’s experience as embodied in the designer since it unfolds its genesis simultaneously to the ways of existence of those who are involved in the generative experience. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=architecture" title="architecture">architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=theoretical%20graphics" title=" theoretical graphics"> theoretical graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=evolutionary%20genetics" title=" evolutionary genetics"> evolutionary genetics</a>, <a href="https://publications.waset.org/abstracts/search?q=formal%20perception" title=" formal perception"> formal perception</a> </p> <a href="https://publications.waset.org/abstracts/158586/theoretical-approaches-to-graphic-and-formal-generation-from-evolutionary-genetics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158586.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1189</span> A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mulugeta%20K.%20Tefera">Mulugeta K. Tefera</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaolong%20Yang"> Xiaolong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20Liu"> Jian Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection and tracking of moving objects are very important in many application contexts such as detection and recognition of people, visual surveillance and automatic generation of video effect and so on. However, the task of detecting a real shape of an object in motion becomes tricky due to various challenges like dynamic scene changes, presence of shadow, and illumination variations due to light switch. For such systems, once the moving object is detected, tracking is also a crucial step for those applications that used in military defense, video surveillance, human computer interaction, and medical diagnostics as well as in commercial fields such as video games. In this paper, an object presents in dynamic background is detected using adaptive mixture of Gaussian based analysis of the video sequences. Then the detected moving object is tracked using the region based moving object tracking and inter-frame differential mechanisms to address the partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target by enhancing and post processing morphological operations. Secondly, the extracted object uses region based moving object tracking and inter-frame difference to improve the tracking speed of real-time moving objects in different video frames. Finally, the plotting method was applied to detect the moving objects effectively and describes the object’s motion being tracked. The experiment has been performed on image sequences acquired both indoor and outdoor environments and one stationary and web camera has been used. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20modeling" title="background modeling">background modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=inter-frame%20difference" title=" inter-frame difference"> inter-frame difference</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20and%20tracking" title=" object detection and tracking"> object detection and tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/78578/a-real-time-moving-object-detection-and-tracking-scheme-and-its-implementation-for-video-surveillance-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78578.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">477</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1188</span> Object-Oriented Multivariate Proportional-Integral-Derivative Control of Hydraulic Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Fernandez%20de%20Canete">J. Fernandez de Canete</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Fernandez-Calvo"> S. Fernandez-Calvo</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20Garc%C3%ADa-Moral"> I. García-Moral</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents and discusses the application of the object-oriented modelling software SIMSCAPE to hydraulic systems, with particular reference to multivariable proportional-integral-derivative (PID) control. As a result, a particular modelling approach of a double cylinder-piston coupled system is proposed and motivated, and the SIMULINK based PID tuning tool has also been used to select the proper controller parameters. The paper demonstrates the usefulness of the object-oriented approach when both physical modelling and control are tackled. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object-oriented%20modeling" title="object-oriented modeling">object-oriented modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=multivariable%20hydraulic%20system" title=" multivariable hydraulic system"> multivariable hydraulic system</a>, <a href="https://publications.waset.org/abstracts/search?q=multivariable%20PID%20control" title=" multivariable PID control"> multivariable PID control</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20simulation" title=" computer simulation"> computer simulation</a> </p> <a href="https://publications.waset.org/abstracts/67799/object-oriented-multivariate-proportional-integral-derivative-control-of-hydraulic-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67799.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">349</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1187</span> Designing AI-Enabled Smart Maintenance Scheduler: Enhancing Object Reliability through Automated Management</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arun%20Prasad%20Jaganathan">Arun Prasad Jaganathan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In today's rapidly evolving technological landscape, the need for efficient and proactive maintenance management solutions has become increasingly evident across various industries. Traditional approaches often suffer from drawbacks such as reactive strategies, leading to potential downtime, increased costs, and decreased operational efficiency. In response to these challenges, this paper proposes an AI-enabled approach to object-based maintenance management aimed at enhancing reliability and efficiency. The paper contributes to the growing body of research on AI-driven maintenance management systems, highlighting the transformative impact of intelligent technologies on enhancing object reliability and operational efficiency. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AI" title="AI">AI</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=predictive%20maintenance" title=" predictive maintenance"> predictive maintenance</a>, <a href="https://publications.waset.org/abstracts/search?q=object-based%20maintenance" title=" object-based maintenance"> object-based maintenance</a>, <a href="https://publications.waset.org/abstracts/search?q=expert%20team%20scheduling" title=" expert team scheduling"> expert team scheduling</a> </p> <a href="https://publications.waset.org/abstracts/185812/designing-ai-enabled-smart-maintenance-scheduler-enhancing-object-reliability-through-automated-management" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185812.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">58</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1186</span> Contrastive Learning for Unsupervised Object Segmentation in Sequential Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tian%20Zhang">Tian Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Unsupervised object segmentation aims at segmenting objects in sequential images and obtaining the mask of each object without any manual intervention. Unsupervised segmentation remains a challenging task due to the lack of prior knowledge about these objects. Previous methods often require manually specifying the action of each object, which is often difficult to obtain. Instead, this paper does not need action information of objects and automatically learns the actions and relations among objects from the structured environment. To obtain the object segmentation of sequential images, the relationships between objects and images are extracted to infer the action and interaction of objects based on the multi-head attention mechanism. Three types of objects’ relationships in the object segmentation task are proposed: the relationship between objects in the same frame, the relationship between objects in two frames, and the relationship between objects and historical information. Based on these relationships, the proposed model (1) is effective in multiple objects segmentation tasks, (2) just needs images as input, and (3) produces better segmentation results as more relationships are considered. The experimental results on multiple datasets show that this paper’s method achieves state-of-art performance. The quantitative and qualitative analyses of the result are conducted. The proposed method could be easily extended to other similar applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20object%20segmentation" title="unsupervised object segmentation">unsupervised object segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title=" attention mechanism"> attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=contrastive%20learning" title=" contrastive learning"> contrastive learning</a>, <a href="https://publications.waset.org/abstracts/search?q=structured%20environment" title=" structured environment"> structured environment</a> </p> <a href="https://publications.waset.org/abstracts/148401/contrastive-learning-for-unsupervised-object-segmentation-in-sequential-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148401.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">109</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1185</span> Learning Object Interface Adapted to the Learner's Learning Style</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zenaide%20Carvalho%20da%20Silva">Zenaide Carvalho da Silva</a>, <a href="https://publications.waset.org/abstracts/search?q=Leandro%20Rodrigues%20Ferreira"> Leandro Rodrigues Ferreira</a>, <a href="https://publications.waset.org/abstracts/search?q=Andrey%20Ricardo%20Pimentel"> Andrey Ricardo Pimentel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Learning styles (LS) refer to the ways and forms that the student prefers to learn in the teaching and learning process. Each student has their own way of receiving and processing information throughout the learning process. Therefore, knowing their LS is important to better understand their individual learning preferences, and also, understand why the use of some teaching methods and techniques give better results with some students, while others it does not. We believe that knowledge of these styles enables the possibility of making propositions for teaching; thus, reorganizing teaching methods and techniques in order to allow learning that is adapted to the individual needs of the student. Adapting learning would be possible through the creation of online educational resources adapted to the style of the student. In this context, this article presents the structure of a learning object interface adaptation based on the LS. The structure created should enable the creation of the adapted learning object according to the student's LS and contributes to the increase of student’s motivation in the use of a learning object as an educational resource. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adaptation" title="adaptation">adaptation</a>, <a href="https://publications.waset.org/abstracts/search?q=interface" title=" interface"> interface</a>, <a href="https://publications.waset.org/abstracts/search?q=learning%20object" title=" learning object"> learning object</a>, <a href="https://publications.waset.org/abstracts/search?q=learning%20style" title=" learning style"> learning style</a> </p> <a href="https://publications.waset.org/abstracts/67882/learning-object-interface-adapted-to-the-learners-learning-style" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67882.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">406</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=40">40</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=41">41</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a 
href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>