<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: object finder</title> <meta name="description" content="Search results for: object finder"> <meta name="keywords" content="object finder"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open 
Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="object finder" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div 
class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="object finder"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1231</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: object finder</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1231</span> Design and Implementation of a Bluetooth-Based Misplaced Object Finder Using DFRobot Arduino Interfaced with Led and Buzzer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bright%20Emeni">Bright Emeni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project is a system that allows users to locate their misplaced or lost devices by using Bluetooth technology. It utilizes the DFRobot Beetle BLE Arduino microcontroller as its main component for communication and control. By interfacing it with an LED and a buzzer, the system provides visual and auditory signals to assist in locating the target device.
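Distance estimation from Bluetooth signal strength, as used in locators of this kind, is commonly done with a log-distance path-loss model; a minimal sketch, where the reference power and path-loss exponent are illustrative assumptions rather than values from the paper:

```python
def estimate_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: estimated distance in metres from RSSI.

    tx_power_dbm is the expected RSSI at 1 m (an illustrative default) and
    path_loss_exponent is ~2 in free space, larger indoors.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(estimate_distance(-59))  # 1.0  (at the 1 m reference power)
print(estimate_distance(-79))  # 10.0 (20 dB weaker => 10x farther for n = 2)
```

In practice RSSI is noisy, so such an estimate is usually smoothed over several readings before being shown to the user.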
The search process can be initiated through an Android application, through which the system creates a Bluetooth connection between the microcontroller and the target device, permitting the exchange of signals for tracking purposes. When the device is within range, the LED indicator illuminates, and the buzzer produces audible alerts, guiding the user to the device's location. The application also provides an estimated distance to the object using Bluetooth signal strength. The project’s goal is to offer a practical and efficient solution for finding misplaced devices, leveraging the capabilities of Bluetooth technology and microcontroller-based control systems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bluetooth%20finder" title="Bluetooth finder">Bluetooth finder</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20finder" title=" object finder"> object finder</a>, <a href="https://publications.waset.org/abstracts/search?q=Bluetooth%20tracking" title=" Bluetooth tracking"> Bluetooth tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=tracker" title=" tracker"> tracker</a> </p> <a href="https://publications.waset.org/abstracts/179777/design-and-implementation-of-a-bluetooth-based-misplaced-object-finder-using-dfrobot-arduino-interfaced-with-led-and-buzzer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/179777.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">64</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1230</span> Effect of Different Parameters of Converging-Diverging Vortex Finders on Cyclone Separator Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=V.%20Kumar">V. Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Jha"> K. Jha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study explores design modifications of the vortex finder, as it has a significant effect on cyclone separator performance. The study strives to improve the overall performance of cyclone separators by utilizing a converging-diverging (CD) vortex finder instead of the traditional uniform diameter vortex finder. The velocity and pressure fields inside a Stairmand cyclone separator with body diameter 0.29 m and vortex finder diameter 0.1305 m are calculated. The commercial software Ansys Fluent v14.0 is used to simulate the flow field in a uniform diameter cyclone and six cyclones modified with CD vortex finders. A Reynolds stress model is used to simulate the effects of turbulence on the fluid and particulate phases, and a discrete phase model is used to calculate the particle trajectories. The performance of the modified vortex finders is compared with the traditional vortex finder. The effects of the lengths of the converging and diverging sections, the throat diameter, and the end diameters of the convergent-divergent section are also studied to achieve enhanced performance. The pressure and velocity fields inside the vortex finder are presented by means of contour plots and velocity vectors, and changes in the flow pattern due to variation of the geometrical variables are also analysed. Results indicate that a convergent-divergent vortex finder is capable of achieving a lower pressure drop than a uniform diameter vortex finder.
It is also observed that the end diameters of the CD vortex finder, the throat diameter, and the length of the diverging part have a significant impact on cyclone separator performance. Increasing the lower diameter of the vortex finder by 66% results in an 11.5% decrease in the dimensionless pressure drop (Euler number) with a 5.8% decrease in separation efficiency, whereas a 50% decrease in the throat diameter gives a 5.9% increase in the Euler number with a 10.2% increase in separation efficiency, and increasing the length of the diverging part gives a 10.28% increase in the Euler number with a 5.74% increase in separation efficiency. Increasing the upper diameter of the CD vortex finder produces an adverse effect on performance, as it increases the pressure drop significantly and decreases the separation efficiency. Increasing the length of the converging section is not seen to affect the performance significantly. From the present study, it is concluded that convergent-divergent vortex finders can be used in place of uniform diameter vortex finders to achieve better cyclone separator performance.
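For reference, the dimensionless pressure drop quoted above is the Euler number, computed from the static pressure drop, gas density, and inlet velocity; a minimal sketch with illustrative numbers (not from the paper):

```python
def euler_number(pressure_drop_pa, density_kg_m3, inlet_velocity_m_s):
    """Eu = dp / (0.5 * rho * v^2): the cyclone's dimensionless pressure drop."""
    return pressure_drop_pa / (0.5 * density_kg_m3 * inlet_velocity_m_s ** 2)

# Illustrative values (not from the paper): air at 1.2 kg/m^3, 15 m/s inlet.
eu_uniform = euler_number(1080.0, 1.2, 15.0)
eu_cd = eu_uniform * (1 - 0.115)  # the quoted 11.5% reduction for a CD vortex finder
print(round(eu_uniform, 2), round(eu_cd, 2))  # 8.0 7.08
```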
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convergent-divergent%20vortex%20finder" title="convergent-divergent vortex finder">convergent-divergent vortex finder</a>, <a href="https://publications.waset.org/abstracts/search?q=cyclone%20separator" title=" cyclone separator"> cyclone separator</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20phase%20modeling" title=" discrete phase modeling"> discrete phase modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=Reynolds%20stress%20model" title=" Reynolds stress model"> Reynolds stress model</a> </p> <a href="https://publications.waset.org/abstracts/94779/effect-of-different-parameters-of-converging-diverging-vortex-finders-on-cyclone-separator-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94779.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">172</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1229</span> Canonical Objects and Other Objects in Arabic</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Safiah%20Ahmed%20Madkhali">Safiah Ahmed Madkhali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The grammatical relation object has not attracted the same attention in the literature as subject has. Where there is a clearly monotransitive verb such as kick, the criteria for identifying the grammatical relation may converge. However, the term object is also used to refer to phenomena that do not subsume all, or even most, of the recognized properties of the canonical object. 
Instances of such phenomena include non-canonical objects such as the ones in the so-called double-object construction i.e. the indirect object and the direct object as in (He bought his dog a new collar). In this paper, it is demonstrated how criteria of identifying the grammatical relation object that are found in the theoretical and typological literature can be applied to Arabic. Also, further language-specific criteria are here derived from the regularities of the canonical object in the language. The criteria established in this way are then applied to the non-canonical objects to demonstrate how far they conform to, or diverge from, the canonical object. Contrary to the claim that the direct object is more similar to the canonical object than is the indirect object, it was found that it is, in fact, the indirect object rather than the direct object that shares most of the aspects of the canonical object in monotransitive clauses. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=canonical%20objects" title="canonical objects">canonical objects</a>, <a href="https://publications.waset.org/abstracts/search?q=double-object%20constructions" title=" double-object constructions"> double-object constructions</a>, <a href="https://publications.waset.org/abstracts/search?q=cognate%20object%20constructions" title=" cognate object constructions"> cognate object constructions</a>, <a href="https://publications.waset.org/abstracts/search?q=non-canonical%20objects" title=" non-canonical objects"> non-canonical objects</a> </p> <a href="https://publications.waset.org/abstracts/141579/canonical-objects-and-other-objects-in-arabic" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141579.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> 
</div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1228</span> When Pain Becomes Love For God: The Non-Object Self</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Roni%20Naor-Hofri">Roni Naor-Hofri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper shows how self-inflicted pain enabled the expression of love for God among Christian monastic ascetics in medieval central Europe. As scholars have shown, being in a state of pain leads to a change in or destruction of language, an essential feature of the self. The author argues that this transformation allows the self to transcend its boundaries as an object, even if only temporarily and in part. The epistemic achievement of love for God, a non-object, would not otherwise have been possible. To substantiate her argument, the author shows that the self’s transformation into a non-object enables the imitation of God: not solely in the sense of imitatio Christi, of physical and visual representations of God incarnate in the flesh of His son Christ, but also in the sense of the self’s experience of being a non-object, just like God, the target of the self’s love. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=love%20for%20God" title="love for God ">love for God </a>, <a href="https://publications.waset.org/abstracts/search?q=pain" title=" pain"> pain</a>, <a href="https://publications.waset.org/abstracts/search?q=philosophy" title=" philosophy"> philosophy</a>, <a href="https://publications.waset.org/abstracts/search?q=religion" title=" religion"> religion</a> </p> <a href="https://publications.waset.org/abstracts/135417/when-pain-becomes-love-for-god-the-non-object-self" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135417.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">243</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1227</span> Pose Normalization Network for Object Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bingquan%20Shen">Bingquan Shen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Convolutional Neural Networks (CNN) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. Given the problem where one has only limited viewpoints of a particular object for classification, we present a pose normalization architecture to transform the object to existing viewpoints in the training dataset before classification to yield better classification performance. We have demonstrated that this Pose Normalization Network (PNN) can capture the style of the target object and is able to re-render it to a desired viewpoint.
Moreover, we have shown that the PNN improves the classification result for the 3D chairs dataset and ShapeNet airplanes dataset when given only images at limited viewpoint, as compared to a CNN baseline. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20classification" title=" object classification"> object classification</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20normalization" title=" pose normalization"> pose normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=viewpoint%20invariant" title=" viewpoint invariant"> viewpoint invariant</a> </p> <a href="https://publications.waset.org/abstracts/56852/pose-normalization-network-for-object-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56852.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1226</span> Multichannel Object Detection with Event Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rafael%20Iliasov">Rafael Iliasov</a>, <a href="https://publications.waset.org/abstracts/search?q=Alessandro%20Golkar"> Alessandro Golkar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection based on event vision has been a dynamically growing field in computer vision for the last 16 years. 
In this work, we create multiple channels from a single event camera and propose an event fusion method (EFM) to enhance object detection in event-based vision systems. Each channel uses a different accumulation buffer to collect events from the event camera. We implement YOLOv7 for object detection, followed by a fusion algorithm. Our multichannel approach outperforms single-channel-based object detection by 0.7% in mean Average Precision (mAP) for detections overlapping ground truth at IoU = 0.5. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=event%20camera" title="event camera">event camera</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20with%20multimodal%20inputs" title=" object detection with multimodal inputs"> object detection with multimodal inputs</a>, <a href="https://publications.waset.org/abstracts/search?q=multichannel%20fusion" title=" multichannel fusion"> multichannel fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/190247/multichannel-object-detection-with-event-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190247.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">27</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1225</span> The Study on How Social Cues in a Scene Modulate Basic Object Recognition Process</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shih-Yu%20Lo">Shih-Yu Lo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Stereotypes exist in
almost every society, affecting how people interact with each other. However, to our knowledge, the influence of stereotypes has rarely been explored in the context of basic perceptual processes. This study aims to explore how the gender stereotype affects object recognition. Participants were presented with a series of scene pictures, each followed by a target display with a man or a woman holding a weapon or a non-weapon object. The task was to identify whether the object in the target display was a weapon or not. Although the gender of the object holder could not predict whether he or she held a weapon, and was irrelevant to the task goal, participants nevertheless tended to identify the object as a weapon more often when the object holder was a man than when it was a woman. An analysis based on signal detection theory showed that the stereotype effect on object recognition mainly resulted from the participants' bias toward making a 'weapon' response when a man rather than a woman was in the scene. In addition, there was a trend for participants' sensitivity in differentiating a weapon from a non-threatening object to be higher when a woman rather than a man was in the scene. The results of this study suggest that irrelevant social cues implied in the visual scene can be powerful enough to modulate the basic object recognition process.
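The bias and sensitivity measures in this abstract follow standard equal-variance signal detection theory; a minimal sketch of how d′ and the criterion c are derived from hit and false-alarm rates (the rates below are illustrative, not the study's data):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Return (d_prime, criterion_c) under equal-variance Gaussian SDT.

    d' = z(H) - z(FA) indexes sensitivity; c = -(z(H) + z(FA)) / 2 indexes
    response bias (negative c = liberal, i.e. more 'weapon' responses).
    """
    z = NormalDist().inv_cdf
    return (z(hit_rate) - z(false_alarm_rate),
            -(z(hit_rate) + z(false_alarm_rate)) / 2)

# Illustrative rates showing a liberal bias toward 'weapon' responses.
d_prime, criterion = sdt_measures(hit_rate=0.90, false_alarm_rate=0.30)
print(round(d_prime, 2), round(criterion, 2))  # 1.81 -0.38
```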
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gender%20stereotype" title="gender stereotype">gender stereotype</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20detection%20theory" title=" signal detection theory"> signal detection theory</a>, <a href="https://publications.waset.org/abstracts/search?q=weapon" title=" weapon"> weapon</a> </p> <a href="https://publications.waset.org/abstracts/92535/the-study-on-how-social-cues-in-a-scene-modulate-basic-object-recognition-proces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92535.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">209</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1224</span> Specified Human Motion Recognition and Unknown Hand-Held Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jinsiang%20Shaw">Jinsiang Shaw</a>, <a href="https://publications.waset.org/abstracts/search?q=Pik-Hoe%20Chen"> Pik-Hoe Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to integrate human recognition, motion recognition, and object tracking technologies without requiring a pre-training database model for motion recognition or the unknown object itself. Furthermore, it can simultaneously track multiple users and multiple objects. Unlike other existing human motion recognition methods, our approach employs a rule-based condition method to determine if a user hand is approaching or departing an object. 
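A rule-based approach/departure condition of the kind just described might be sketched as a check on whether the hand-object distance shrinks or grows across frames; the function name, threshold, and coordinates below are hypothetical, not from the paper:

```python
import math

def hand_object_state(hand_track, object_centroid, min_delta=2.0):
    """Classify the last two frames of a hand trajectory as 'approaching',
    'departing', or 'static' relative to a fixed object centroid.

    hand_track: list of (x, y) hand centroids in pixels, oldest first.
    min_delta: change in distance (pixels) below which the hand is 'static'.
    """
    d_prev = math.dist(hand_track[-2], object_centroid)
    d_curr = math.dist(hand_track[-1], object_centroid)
    if d_prev - d_curr > min_delta:
        return "approaching"
    if d_curr - d_prev > min_delta:
        return "departing"
    return "static"

print(hand_object_state([(100, 100), (60, 60)], (50, 50)))  # approaching
print(hand_object_state([(60, 60), (100, 100)], (50, 50)))  # departing
```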
It uses a background subtraction method to separate the human and object from the background, and employs behavior features to effectively interpret human object-grabbing actions. With an object’s histogram characteristics, we are able to isolate and track it using back projection. Hence, a moving object trajectory can be recorded and the object itself can be located. This particular technique can be used in a camera surveillance system in a shopping area to perform real-time intelligent surveillance, thus preventing theft. Experimental results verify the validity of the developed surveillance algorithm with an accuracy of 83% for shoplifting detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Automatic%20Tracking" title="Automatic Tracking">Automatic Tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Back%20Projection" title=" Back Projection"> Back Projection</a>, <a href="https://publications.waset.org/abstracts/search?q=Motion%20Recognition" title=" Motion Recognition"> Motion Recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Shoplifting" title=" Shoplifting"> Shoplifting</a> </p> <a href="https://publications.waset.org/abstracts/66866/specified-human-motion-recognition-and-unknown-hand-held-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1223</span> Facility Detection from Image Using Mathematical Morphology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=In-Geun%20Lim">In-Geun 
Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung-Woong%20Ra"> Sung-Woong Ra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As high-resolution satellite images have become available, many studies have been carried out to exploit them in various fields. This paper proposes a method based on mathematical morphology for extracting the ‘horse's hoof shaped object’. The proposed method enables an automatic object detection system to rapidly track the meaningful object in a large satellite image. Mathematical morphology operates on binary images, so the method is very simple. It can therefore easily extract the ‘horse's hoof shaped object’ even from images with indistinct edges of the tracked object and with varying quality depending on filming location, time, and environment. By rapidly extracting the ‘horse's hoof shaped object’ with the proposed method, the performance of the automatic object detection system can be improved dramatically.
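Since mathematical morphology here operates on binary images, the core primitives are erosion and dilation, and their composition (opening) removes speckle smaller than the structuring element; a pure-Python sketch with an illustrative 3x3 square element (a real system would use an image-processing library):

```python
def erode(img, k=1):
    """Binary erosion with a (2k+1) x (2k+1) square element; zero padding."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy in range(-k, k + 1) for dx in range(-k, k + 1)))
             for x in range(w)] for y in range(h)]

def dilate(img, k=1):
    """Binary dilation with the same square structuring element."""
    h, w = len(img), len(img[0])
    return [[int(any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy in range(-k, k + 1) for dx in range(-k, k + 1)))
             for x in range(w)] for y in range(h)]

def opening(img, k=1):
    """Erosion followed by dilation: removes speckle smaller than the element."""
    return dilate(erode(img, k), k)

# A 3x3 block survives opening; the isolated top-right pixel is removed.
img = [[0, 0, 0, 0, 1],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
print(opening(img))
```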
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facility%20detection" title="facility detection">facility detection</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20image" title=" satellite image"> satellite image</a>, <a href="https://publications.waset.org/abstracts/search?q=object" title=" object"> object</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20morphology" title=" mathematical morphology"> mathematical morphology</a> </p> <a href="https://publications.waset.org/abstracts/67611/facility-detection-from-image-using-mathematical-morphology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67611.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">381</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1222</span> Calculation of the Added Mass of a Submerged Object with Variable Sizes at Different Distances from the Wall via Lattice Boltzmann Simulations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nastaran%20Ahmadpour%20Samani">Nastaran Ahmadpour Samani</a>, <a href="https://publications.waset.org/abstracts/search?q=Shahram%20Talebi"> Shahram Talebi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Added mass is an important quantity in the analysis of the motion of a submerged object, which can be calculated by solving the equation of potential flow around the object. Here, we consider systems in which a square object is submerged in a channel of fluid and moves parallel to the wall.
The corresponding added mass at a given distance d from the wall and for object size s (the side length of the square object) is calculated via lattice Boltzmann simulation. By changing d and s separately, their effect on the added mass is studied systematically. The simulation results reveal that for systems in which d > 4s, the distance no longer influences the added mass. The added mass increases as the object approaches the wall and reaches its maximum value as the object moves along the wall (d → 0). In this case, the added mass is about 73% larger than that of the case d = 4s. In addition, it is observed that the added mass increases with increasing object size s and vice versa. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lattice%20Boltzmann%20simulation" title="Lattice Boltzmann simulation ">Lattice Boltzmann simulation </a>, <a href="https://publications.waset.org/abstracts/search?q=added%20mass" title=" added mass"> added mass</a>, <a href="https://publications.waset.org/abstracts/search?q=square" title=" square"> square</a>, <a href="https://publications.waset.org/abstracts/search?q=variable%20size" title=" variable size"> variable size</a> </p> <a href="https://publications.waset.org/abstracts/22399/calculation-of-the-added-mass-of-a-submerged-object-with-variable-sizes-at-different-distances-from-the-wall-via-lattice-boltzmann-simulations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22399.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">476</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1221</span> Real-Time Recognition of the Terrain Configuration to Improve Driving Stability for Unmanned Robots</h5> <div
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bongsoo%20Jeon">Bongsoo Jeon</a>, <a href="https://publications.waset.org/abstracts/search?q=Jayoung%20Kim"> Jayoung Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Jihong%20Lee"> Jihong Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Methods for measuring or estimating ground shape with a laser range finder and a vision sensor (exteroceptive sensors) have a critical weakness: they require a prior database in order to classify acquired data as a unique surface condition for driving. In addition, ground information from exteroceptive sensors does not reflect the deflection of the ground surface caused by the movement of UGVs. Therefore, this paper proposes a method for recognizing exact and precise ground shape using an Inertial Measurement Unit (IMU) as a proprioceptive sensor. First, the method recognizes the attitude of a robot in real time using the IMU and then compensates the robot's attitude data for angle errors through an analysis of vehicle dynamics. The method is verified by outdoor driving experiments with a real mobile robot.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=inertial%20measurement%20unit" title="inertial measurement unit">inertial measurement unit</a>, <a href="https://publications.waset.org/abstracts/search?q=laser%20range%20finder" title=" laser range finder"> laser range finder</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20recognition%20of%20the%20ground%20shape" title=" real-time recognition of the ground shape"> real-time recognition of the ground shape</a>, <a href="https://publications.waset.org/abstracts/search?q=proprioceptive%20sensor" title=" proprioceptive sensor"> proprioceptive sensor</a> </p> <a href="https://publications.waset.org/abstracts/2646/real-time-recognition-of-the-terrain-configuration-to-improve-driving-stability-for-unmanned-robots" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2646.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">286</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1220</span> An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Denis%20A.%20Sokolov">Denis A. Sokolov</a>, <a href="https://publications.waset.org/abstracts/search?q=Andrey%20V.%20Mazurkevich"> Andrey V. Mazurkevich</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft manufacturing, shipbuilding, and rocket engineering.
This has resulted in the development of appropriate measuring instruments capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing their measurement results to a reference measurement on a linear or spatial basis. The reference used in such measurements can be either a reference base, which serves as a set of reference points, or a reference range finder capable of measuring angle increments (EDM). The concept of the EDM for reproducing the unit of measurement has been implemented on a mobile platform that allows angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of our own design is employed. The laser radiation travels to corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with the recommendations of the International Bureau of Weights and Measures for the indirect measurement of the time of passage of light, following the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the repetition frequency of the femtosecond pulses. The achieved Type A uncertainty of the distance measurements to reflectors 64 m away (N·D/2, where N is an integer) and spaced 1 m apart from each other does not exceed 5 microns.
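As a rough numerical sketch of the formula above (not part of the original abstract), the pulse-pair spacing D/2 = c/(2nF) can be evaluated directly; the refractive index and repetition frequency used here are illustrative assumptions, not the instrument's actual specifications:

```python
# Pulse-pair spacing D/2 = c / (2 * n * F) for a femtosecond EDM.
c = 299_792_458.0   # speed of light in vacuum, m/s (exact by definition)
n = 1.000_27        # assumed refractive index of air (illustrative)
F = 60e6            # assumed pulse repetition frequency, Hz (illustrative)

half_D = c / (2 * n * F)
print(f"D/2 = {half_D:.4f} m")  # on the order of 2.5 m, as stated in the abstract
```

With these assumed values, the spacing comes out close to the 2.5 m quoted in the abstract, which is why a 64 m baseline corresponds to an integer multiple N·D/2.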
The angular uncertainty is estimated theoretically, since standard high-precision ring encoders will be used; they are not a focus of this study. The Type B uncertainty components are not taken into account either, as the dominant components do not depend on the selected coordinate measurement method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where an advantage in accuracy can be achieved. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the Type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be used to develop a highly accurate mobile absolute range finder for the calibration of high-precision laser trackers, laser rangefinders, and other equipment, using a 64-meter laboratory comparator as a reference.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=femtosecond%20laser" title="femtosecond laser">femtosecond laser</a>, <a href="https://publications.waset.org/abstracts/search?q=pulse%20correlation" title=" pulse correlation"> pulse correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=interferometer" title=" interferometer"> interferometer</a>, <a href="https://publications.waset.org/abstracts/search?q=laser%20absolute%20range%20finder" title=" laser absolute range finder"> laser absolute range finder</a>, <a href="https://publications.waset.org/abstracts/search?q=coordinate%20measurement" title=" coordinate measurement"> coordinate measurement</a> </p> <a href="https://publications.waset.org/abstracts/183368/an-absolute-femtosecond-rangefinder-for-metrological-support-in-coordinate-measurements" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183368.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">59</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1219</span> Adaptive Online Object Tracking via Positive and Negative Models Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaomei%20Li">Shaomei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yawen%20Wang"> Yawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Gao"> Chao Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To mitigate the tracking drift that often occurs in adaptive tracking, this paper proposes an algorithm based on the fusion of tracking and detection.
First, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Second, the object is tracked frame by frame via particle filtering. Third, tracking reliability is validated by matching both positive and negative models. Finally, when drift occurs, the object is relocated based on SIFT feature matching and voting, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title="object tracking">object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking%20drift" title=" tracking drift"> tracking drift</a>, <a href="https://publications.waset.org/abstracts/search?q=partial%20least%20squares%20analysis" title=" partial least squares analysis"> partial least squares analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20and%20negative%20models%20matching" title=" positive and negative models matching"> positive and negative models matching</a> </p> <a href="https://publications.waset.org/abstracts/19382/adaptive-online-object-tracking-via-positive-and-negative-models-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1218</span> 6D Posture Estimation of Road Vehicles from Color Images</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yoshimoto%20Kurihara">Yoshimoto Kurihara</a>, <a href="https://publications.waset.org/abstracts/search?q=Tad%20Gonsalves"> Tad Gonsalves</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Currently, in the field of object posture estimation, there is research on estimating the position and angle of an object by storing a 3D model of the object in a computer in advance and matching observations against that model. In this research, however, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks, a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy for each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded at a camera position of 75°, where the classification accuracy was about 87.3% and the regression accuracy about 98.9%.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=6D%20posture%20estimation" title="6D posture estimation">6D posture estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=AlexNet" title=" AlexNet"> AlexNet</a> </p> <a href="https://publications.waset.org/abstracts/138449/6d-posture-estimation-of-road-vehicles-from-color-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1217</span> Object-Oriented Program Comprehension by Identification of Software Components and Their Connexions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdelhak-Djamel%20Seriai">Abdelhak-Djamel Seriai</a>, <a href="https://publications.waset.org/abstracts/search?q=Selim%20Kebir"> Selim Kebir</a>, <a href="https://publications.waset.org/abstracts/search?q=Allaoua%20Chaoui"> Allaoua Chaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the last decades, object-oriented programming has been massively used to build large-scale systems. However, the evolution and maintenance of such systems become a laborious task because of the inability of object-oriented programming to offer a precise view of the functional building blocks of the system.
This limitation is caused by the fine granularity of classes and objects. In this paper, we use a post-object-oriented technology, namely software components, to propose an approach based on the identification of the functional building blocks of an object-oriented system by analyzing its source code. These functional blocks are specified as software components, and the result is a multi-layer component-based software architecture. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=software%20comprehension" title="software comprehension">software comprehension</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20component" title=" software component"> software component</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20oriented" title=" object oriented"> object oriented</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20architecture" title=" software architecture"> software architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=reverse%20engineering" title=" reverse engineering"> reverse engineering</a> </p> <a href="https://publications.waset.org/abstracts/32119/object-oriented-program-comprehension-by-identification-of-software-components-and-their-connexions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32119.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1216</span> UAV Based Visual Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaibhav%20Dalmia">Vaibhav Dalmia</a>, <a
href="https://publications.waset.org/abstracts/search?q=Manoj%20Phirke"> Manoj Phirke</a>, <a href="https://publications.waset.org/abstracts/search?q=Renith%20G"> Renith G</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the wide adoption of UAVs (unmanned aerial vehicles) by governments as well as private corporations in various industries for solving computer vision tasks, it is necessary that their potential be analyzed completely. Recent advances in deep learning have also left us with a plethora of algorithms for solving different computer vision tasks. This study provides a comprehensive survey of the visual object tracking problem and explains the tradeoffs involved in building a real-time yet reasonably accurate object tracking system for UAVs by examining existing methods and evaluating them on aerial datasets. Finally, the trackers best suited to UAV-based applications are identified. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=drones" title=" drones"> drones</a>, <a href="https://publications.waset.org/abstracts/search?q=single%20object%20tracking" title=" single object tracking"> single object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20object%20tracking" title=" visual object tracking"> visual object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=UAVs" title=" UAVs"> UAVs</a> </p> <a href="https://publications.waset.org/abstracts/145331/uav-based-visual-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card
paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1215</span> Object-Oriented Modeling Simulation and Control of Activated Sludge Process</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Fernandez%20de%20Canete">J. Fernandez de Canete</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Del%20Saz%20Orozco"> P. Del Saz Orozco</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20Garcia-Moral"> I. Garcia-Moral</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Akhrymenka"> A. Akhrymenka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object-oriented modeling is spreading in the current simulation of wastewater treatment plants through the use of the individual components of the process and their relations to define the underlying dynamic equations. In this paper, we describe the use of the free-software OpenModelica simulation environment for the object-oriented modeling of an activated sludge process under feedback control. The performance of the controlled system was analyzed both under normal conditions and in the presence of disturbances. The described object-oriented approach represents a valuable teaching tool and provides practical insight into the field of wastewater process control.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object-oriented%20programming" title="object-oriented programming">object-oriented programming</a>, <a href="https://publications.waset.org/abstracts/search?q=activated%20sludge%20process" title=" activated sludge process"> activated sludge process</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenModelica" title=" OpenModelica"> OpenModelica</a>, <a href="https://publications.waset.org/abstracts/search?q=feedback%20control" title=" feedback control"> feedback control</a> </p> <a href="https://publications.waset.org/abstracts/47240/object-oriented-modeling-simulation-and-control-of-activated-sludge-process" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47240.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">386</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1214</span> Mosaic Augmentation: Insights and Limitations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Olivia%20A.%20Kjorlien">Olivia A. Kjorlien</a>, <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Asghari"> Maryam Asghari</a>, <a href="https://publications.waset.org/abstracts/search?q=Farshid%20Alizadeh-Shabdiz"> Farshid Alizadeh-Shabdiz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of this paper is to investigate the impact of mosaic augmentation on the performance of object detection solutions. To carry out the study, the popular, advanced YOLOv4 and YOLOv4-Tiny object detection models have been selected.
These models are also representative of two model classes, complex and simple. The study has also been carried out on two categories of objects, simple and complex. For this study, YOLOv4 and YOLOv4-Tiny are trained with and without mosaic augmentation for the two sets of objects. While mosaic augmentation improves the performance of simple object detection, it degrades the performance of complex object detection, with the largest negative impact on the false positive rate in the complex object detection case. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accuracy" title="accuracy">accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=false%20positives" title=" false positives"> false positives</a>, <a href="https://publications.waset.org/abstracts/search?q=mosaic%20augmentation" title=" mosaic augmentation"> mosaic augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV4" title=" YOLOV4"> YOLOV4</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV4-Tiny" title=" YOLOV4-Tiny"> YOLOV4-Tiny</a> </p> <a href="https://publications.waset.org/abstracts/162634/mosaic-augmentation-insights-and-limitations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162634.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1213</span> On the Study of the Electromagnetic Scattering by Large Obstacle Based on the Method of Auxiliary Sources</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hidouri%20Sami">Hidouri Sami</a>, <a href="https://publications.waset.org/abstracts/search?q=Aguili%20Taoufik"> Aguili Taoufik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We consider fast and accurate solutions of scattering problems by large perfectly conducting (PEC) objects, formulated through an optimization of the Method of Auxiliary Sources (MAS). We present various techniques used to reduce the total computational cost of the scattering problem. The first technique is based on replacing the object by an array of a finite number of small PEC objects of the same shape. The second solution reduces the problem by considering only half of the object. These two solutions are compared to results from the reference bibliography. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=method%20of%20auxiliary%20sources" title="method of auxiliary sources">method of auxiliary sources</a>, <a href="https://publications.waset.org/abstracts/search?q=scattering" title=" scattering"> scattering</a>, <a href="https://publications.waset.org/abstracts/search?q=large%20object" title=" large object"> large object</a>, <a href="https://publications.waset.org/abstracts/search?q=RCS" title=" RCS"> RCS</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20resources" title=" computational resources"> computational resources</a> </p> <a href="https://publications.waset.org/abstracts/38516/on-the-study-of-the-electromagnetic-scattering-by-large-obstacle-based-on-the-method-of-auxiliary-sources" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">241</span> </span> </div> </div> <div
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1212</span> Vehicular Speed Detection Camera System Using Video Stream</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20A.%20Anser%20Pasha">C. A. Anser Pasha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a new vehicular Speed Detection Camera System (SDCS) is presented; it is applicable as an alternative to traditional radars with the same or even better accuracy. The real-time measurement and analysis of various traffic parameters, such as speed and number of vehicles, are increasingly required in traffic control and management. Image processing techniques are now considered an attractive and flexible method for automatic analysis and data collection in traffic engineering. Various algorithms based on image processing techniques have been applied to detect multiple vehicles and track them. The SDCS processing can be divided into three successive phases. The first phase is object detection, which uses a hybrid algorithm that combines an adaptive background subtraction technique with a three-frame differencing algorithm, rectifying the major drawback of using adaptive background subtraction alone. The second phase is object tracking, which consists of three successive operations: object segmentation, object labeling, and object center extraction. The tracking operation takes into consideration the different possible scenarios for the moving object, such as simple tracking, the object leaving the scene, the object entering the scene, the object being crossed by another object, and one object leaving the scene while another enters. The third phase is speed calculation, in which the speed is computed from the number of frames the object takes to pass through the scene.
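The speed calculation phase described in this abstract follows directly from the frame count; a minimal sketch, where the scene length and frame rate are illustrative assumptions rather than values from the paper:

```python
def speed_kmh(frame_count: int, fps: float, scene_length_m: float) -> float:
    """Estimate average vehicle speed from the number of frames the
    vehicle needs to cross a scene of known length (illustrative sketch)."""
    travel_time_s = frame_count / fps          # time spent crossing the scene
    speed_ms = scene_length_m / travel_time_s  # average speed in m/s
    return speed_ms * 3.6                      # convert m/s to km/h

# Example: a vehicle crossing an assumed 20 m scene in 24 frames at 30 fps
print(speed_kmh(24, 30.0, 20.0))  # about 90 km/h
```

A real deployment would also need a calibrated mapping from image pixels to road meters, which the abstract leaves to the cited algorithms.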
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radar" title="radar">radar</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/45316/vehicular-speed-detection-camera-system-using-video-stream" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45316.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1211</span> Development of 3D Laser Scanner for Robot Navigation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Emre%20%C3%96zt%C3%BCrk">Ali Emre Öztürk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ergun%20Ercelebi"> Ergun Ercelebi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autonomous robotic systems need equipment like a human eye for their movement. Robotic camera systems, distance sensors, and 3D laser scanners have been used in the literature. In this study, a 3D laser scanner has been produced for such autonomous robotic systems. In general, 3D laser scanners use two-dimensional laser range finders that are moved along one axis (1D) to generate the model.
In this study, the model has been obtained with a one-dimensional laser range finder that is moved along two axes (2D), so the laser scanner could be produced more cheaply. Furthermore, the laser scanner uses a motor driver and an embedded system control board, together with a user interface card that handles communication between those cards and the computer. With this laser scanner, the density of objects, the distances between objects, and the necessary pathways for the robot can be calculated. The data collected by the laser scanner system are converted into Cartesian coordinates to be modeled in the AutoCAD program. This study also shows the synchronization between the computer user interface, AutoCAD, and the embedded systems. As a result, it makes the solution cheaper for such systems. The scanning results are sufficient for an autonomous robot, but the scan cycle time should be improved. This study also contributes to further work on hardware and software requirements, since the system offers strong performance at a low cost.
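Converting a single range reading taken at a given pair of scan angles into the Cartesian coordinates needed for the AutoCAD model can be sketched as follows; the pan/tilt angle convention here is an assumption for illustration, not taken from the paper:

```python
import math

def to_cartesian(range_m: float, pan_deg: float, tilt_deg: float):
    """Convert one 1D range-finder reading taken at a pan (horizontal) and
    tilt (vertical) angle into Cartesian (x, y, z) coordinates.
    Assumed convention: pan measured from the x-axis, tilt from the xy-plane."""
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = range_m * math.cos(tilt) * math.cos(pan)
    y = range_m * math.cos(tilt) * math.sin(pan)
    z = range_m * math.sin(tilt)
    return x, y, z

# A reading straight ahead at zero tilt lies on the x-axis
print(to_cartesian(2.0, 0.0, 0.0))  # (2.0, 0.0, 0.0)
```

Sweeping the two motor axes over a grid of pan/tilt angles and applying this conversion to each reading yields the point cloud that the abstract describes exporting to AutoCAD.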
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20laser%20scanner" title="3D laser scanner">3D laser scanner</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20system" title=" embedded system"> embedded system</a>, <a href="https://publications.waset.org/abstracts/search?q=1D%20laser%20range%20finder" title=" 1D laser range finder"> 1D laser range finder</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20model" title=" 3D model"> 3D model</a> </p> <a href="https://publications.waset.org/abstracts/3355/development-of-3d-laser-scanner-for-robot-navigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3355.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">274</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1210</span> Global Based Histogram for 3D Object Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Somar%20Boubou">Somar Boubou</a>, <a href="https://publications.waset.org/abstracts/search?q=Tatsuo%20Narikiyo"> Tatsuo Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Michihiro%20Kawanishi"> Michihiro Kawanishi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we address the problem of 3D object recognition with depth sensors such as the Kinect or Structure sensor. Compared with traditional approaches based on local descriptors, which depend on local information around the object key points, we propose a global-features-based descriptor.
The proposed descriptor, which we name the Differential Histogram of Normal Vectors (DHONV), is designed specifically to capture the surface geometric characteristics of 3D objects represented by depth images. We describe the 3D surface of an object in each frame using a 2D spatial histogram capturing the normalized distribution of the differential angles of the surface normal vectors. Object recognition experiments on the benchmark RGB-D object dataset and a self-collected dataset show that our proposed descriptor outperforms two other descriptors, based on spin images and on histograms of normal vectors, with a linear-SVM classifier. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vision%20in%20control" title="vision in control">vision in control</a>, <a href="https://publications.waset.org/abstracts/search?q=robotics" title=" robotics"> robotics</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20histogram%20of%20normal%20vectors" title=" differential histogram of normal vectors"> differential histogram of normal vectors</a> </p> <a href="https://publications.waset.org/abstracts/47486/global-based-histogram-for-3d-object-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47486.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">279</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1209</span> Deep Learning Application for Object Image Recognition and Robot Automatic Grasping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Shiuh-Jer%20Huang">Shiuh-Jer Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen-Zon%20Yan"> Chen-Zon Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20K.%20Huang"> C. K. Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chun-Chien%20Ting"> Chun-Chien Ting</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since vision systems are in intense demand for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in a vision system to recognize industrial objects, integrated with a 7A6 Series Manipulator for an automatic object-gripping task. A PC and a graphics processing unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to capture images for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in a convolutional neural network (CNN) structure for object classification and center-point prediction. Additionally, an image-processing strategy is used to find the object contour for calculating the object orientation angle. The specified object location and orientation information are then sent to the robot controller. Finally, a six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 successfully detects the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments. 
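The contour-based orientation step described above can be sketched as follows. This is a minimal illustration, not the authors' code: `orientation_angle` is a hypothetical helper that estimates the in-plane grasp angle from the principal axis of detected contour points.

```python
import numpy as np

def orientation_angle(contour_points):
    """Estimate an object's in-plane orientation (degrees, in [0, 180))
    from 2D contour points via the principal axis of their spread."""
    pts = np.asarray(contour_points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered.T)                    # 2x2 covariance of x and y
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]      # principal (longest) axis
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# A thin contour tilted 45 degrees should report a 45-degree grasp angle.
t = np.linspace(0.0, 1.0, 50)
tilted = np.stack([t, t], axis=1)
angle = orientation_angle(tilted)
```

In practice the contour would come from an edge or threshold step on the depth image; here a synthetic point set stands in.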
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv2" title=" YOLOv2"> YOLOv2</a>, <a href="https://publications.waset.org/abstracts/search?q=7A6%20series%20manipulator" title=" 7A6 series manipulator"> 7A6 series manipulator</a> </p> <a href="https://publications.waset.org/abstracts/110468/deep-learning-application-for-object-image-recognition-and-robot-automatic-grasping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110468.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1208</span> An Approach from Fichte as a Response to the Kantian Dualism of Subject and Object: The Unity of the Subject and Object in Both Theoretical and Ethical Possibility</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mengjie%20Liu">Mengjie Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This essay aims at responding to Kant’s arguments on how to fit the self-caused subject into the deterministic object that follows natural laws. The essay mainly adopts the approach abstracted from Fichte’s “Wissenshaftslehre” (Doctrine of Science) to picture a possible solution to the conciliation of Kantian dualism. 
The Fichte approach is based on the unity of theoretical and practical reason, which can be understood as a philosophical abstraction from ordinary experience combining both subject and object. The first section discusses the general problem of Kantian dualism and Fichte’s unity approach. The second section elaborates on how this unity of subject and object is achieved through Fichte’s process of “the I posits itself.” The third section concerns the ethical unity of subject and object based on the Fichte approach. The essay also discusses the limitations of Fichte’s approach from two perspectives: (1) the theoretical possibility of the existence of the pure I, and (2) Schelling’s statement that the Absolute I is a result rather than the originating act. This essay demonstrates a possible approach to unifying subject and object supported by Fichte’s “Absolute I” and his ethical theories, and also points out the limitations of Fichte’s theories. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fichte" title="Fichte">Fichte</a>, <a href="https://publications.waset.org/abstracts/search?q=identity" title=" identity"> identity</a>, <a href="https://publications.waset.org/abstracts/search?q=Kantian%20dualism" title=" Kantian dualism"> Kantian dualism</a>, <a href="https://publications.waset.org/abstracts/search?q=Wissenshaftslehre" title=" Wissenshaftslehre"> Wissenshaftslehre</a> </p> <a href="https://publications.waset.org/abstracts/150645/an-approach-from-fichte-as-a-response-to-the-kantian-dualism-of-subject-and-object-the-unity-of-the-subject-and-object-in-both-theoretical-and-ethical-possibility" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150645.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1207</span> Active Space Debris Removal by Extreme Ultraviolet Radiation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Anandha%20Selvan">A. Anandha Selvan</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Malarvizhi"> B. Malarvizhi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years the problem of space debris has become very serious. The mass of artificial objects in orbit has increased quite steadily, at a rate of about 145 metric tons annually, leading to a total of approximately 7000 metric tons. About 97% of space debris objects now orbit in the LEO region, where catastrophic collisions are most likely to occur and generate new debris. 
Thus, we propose a concept for cleaning space debris in the thermosphere by directing extreme ultraviolet (EUV) radiation ahead of the debris object from a re-orbiter. In our concept, the EUV radiation expands the thermosphere by reacting with atmospheric gas particles, producing drag in front of the debris object. This drag force is high enough to slow the debris object’s relative velocity, so the object gradually loses altitude and finally enters the Earth’s atmosphere. After the first target is removed, the re-orbiter can move on to the next target. This method removes debris without capturing it, so it can be applied to a wide range of debris objects regardless of their shape or rotation. This paper discusses the operation of the re-orbiter for removing space debris in the thermosphere region. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=active%20space%20debris%20removal" title="active space debris removal">active space debris removal</a>, <a href="https://publications.waset.org/abstracts/search?q=space%20debris" title=" space debris"> space debris</a>, <a href="https://publications.waset.org/abstracts/search?q=LEO" title=" LEO"> LEO</a>, <a href="https://publications.waset.org/abstracts/search?q=extreme%20ultraviolet" title=" extreme ultraviolet"> extreme ultraviolet</a>, <a href="https://publications.waset.org/abstracts/search?q=re-orbiter" title=" re-orbiter"> re-orbiter</a>, <a href="https://publications.waset.org/abstracts/search?q=thermosphere" title=" thermosphere"> thermosphere</a> </p> <a href="https://publications.waset.org/abstracts/20478/active-space-debris-removal-by-extreme-ultraviolet-radiation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20478.pdf" target="_blank" 
class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1206</span> Genetic Algorithm Based Deep Learning Parameters Tuning for Robot Object Recognition and Grasping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Delowar%20Hossain">Delowar Hossain</a>, <a href="https://publications.waset.org/abstracts/search?q=Genci%20Capi"> Genci Capi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper concerns the problem of tuning deep learning (DL) parameters using a genetic algorithm (GA) in order to improve the performance of the DL method. We present a GA-based DL method for robot object recognition and grasping. The GA optimizes the DL parameters during the learning procedure with respect to a fitness function. After the evolution process finishes, we obtain the optimized DL parameters. To evaluate the performance of our method, we consider object recognition and robot grasping tasks. Experimental results show that our method is efficient for robot object recognition and grasping. 
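As a hedged sketch of the idea (not the authors' implementation), a minimal genetic algorithm can tune two hypothetical hyperparameters; a toy surrogate fitness stands in for the recognition accuracy the paper actually optimizes.

```python
import random

def fitness(lr, units):
    # Illustrative surrogate peaking near lr=0.01, units=128;
    # in the paper this would be recognition/grasping performance.
    return -((lr - 0.01) ** 2) * 1e4 - ((units - 128) / 128) ** 2

def evolve(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    # Genome: (learning rate, hidden units) -- hypothetical DL parameters.
    pop = [(rng.uniform(1e-4, 0.1), rng.randint(16, 512)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(*g), reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            lr = rng.choice([a[0], b[0]])          # uniform crossover
            units = rng.choice([a[1], b[1]])
            if rng.random() < 0.3:                 # mutation
                lr *= rng.uniform(0.5, 2.0)
                units = max(16, min(512, units + rng.randint(-32, 32)))
            children.append((lr, units))
        pop = parents + children
    return max(pop, key=lambda g: fitness(*g))

best_lr, best_units = evolve()
```

Because the parents are carried over each generation, the best genome found so far is never lost, mirroring the elitist GA variants commonly used for hyperparameter tuning.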
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=robot%20grasping" title=" robot grasping"> robot grasping</a> </p> <a href="https://publications.waset.org/abstracts/67943/genetic-algorithm-based-deep-learning-parameters-tuning-for-robot-object-recognition-and-grasping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">353</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1205</span> Urban Land Cover from GF-2 Satellite Images Using Object Based and Neural Network Classifications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lamyaa%20Gamal%20El-Deen%20Taha">Lamyaa Gamal El-Deen Taha</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashraf%20Sharawi"> Ashraf Sharawi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> China launched satellite GF-2 in 2014. This study deals with comparing nearest neighbor object-based classification and neural network classification methods for classification of the fused GF-2 image. Firstly, rectification of GF-2 image was performed. 
Secondly, a comparison between nearest neighbor object-based classification and neural network classification for the fused GF-2 image was performed. Thirdly, the overall classification accuracy and kappa index were calculated. Results indicate that nearest neighbor object-based classification is better than neural network classification for urban mapping. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GF-2%20images" title="GF-2 images">GF-2 images</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction-rectification" title=" feature extraction-rectification"> feature extraction-rectification</a>, <a href="https://publications.waset.org/abstracts/search?q=nearest%20neighbour%20object%20based%20classification" title=" nearest neighbour object based classification"> nearest neighbour object based classification</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation%20algorithms" title=" segmentation algorithms"> segmentation algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network%20classification" title=" neural network classification"> neural network classification</a>, <a href="https://publications.waset.org/abstracts/search?q=multilayer%20perceptron" title=" multilayer perceptron"> multilayer perceptron</a> </p> <a href="https://publications.waset.org/abstracts/84243/urban-land-cover-from-gf-2-satellite-images-using-object-based-and-neural-network-classifications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84243.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">389</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1204</span> Software Defined 
Storage: Object Storage over Hadoop Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amritesh%20Srivastava">Amritesh Srivastava</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaurav%20Sharma"> Gaurav Sharma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this project is to develop an open-source object storage system that is highly durable, scalable, and reliable. There are two representative systems in cloud computing: Google and Amazon. Their storage systems, Google GFS and Amazon S3, provide high reliability, performance, and stability. Our proposed system is highly inspired by Amazon S3. We use the Hadoop Distributed File System (HDFS) Java API to implement our system. We propose an architecture for an object storage system based on Hadoop, discuss the requirements of our system, what we expect from it, and what problems we may encounter. We also give a detailed design proposal along with abstract source code to implement it. The final goal of the system is to provide REST-based access to our object storage system, which sits on top of HDFS. 
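A minimal sketch of the S3-style bucket/key-to-path mapping such a system might use. The real project calls the HDFS Java API; here the local filesystem stands in so the idea is runnable, and the class and method names are illustrative, not from the paper.

```python
import tempfile
from pathlib import Path

class ObjectStore:
    """Toy object store mapping (bucket, key) onto a filesystem tree,
    the way an S3-style facade could map onto HDFS paths."""
    def __init__(self, root):
        self.root = Path(root)

    def _path(self, bucket, key):
        # e.g. PUT /photos/2024/cat.jpg -> <root>/photos/2024/cat.jpg
        return self.root / bucket / key

    def put(self, bucket, key, data: bytes):
        p = self._path(bucket, key)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_bytes(data)

    def get(self, bucket, key) -> bytes:
        return self._path(bucket, key).read_bytes()

    def list(self, bucket):
        b = self.root / bucket
        return sorted(p.relative_to(b).as_posix() for p in b.rglob("*") if p.is_file())

store = ObjectStore(tempfile.mkdtemp())
store.put("photos", "2024/cat.jpg", b"demo-bytes")
data = store.get("photos", "2024/cat.jpg")
```

A REST layer would then expose `put`/`get`/`list` as HTTP PUT/GET on `/bucket/key` routes, with the backing calls swapped for the HDFS client.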
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hadoop" title="Hadoop">Hadoop</a>, <a href="https://publications.waset.org/abstracts/search?q=HBase" title=" HBase"> HBase</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20storage" title=" object storage"> object storage</a>, <a href="https://publications.waset.org/abstracts/search?q=REST" title=" REST"> REST</a> </p> <a href="https://publications.waset.org/abstracts/54130/software-defined-storage-object-storage-over-hadoop-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54130.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">339</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1203</span> Object-Oriented Programming for Modeling and Simulation of Systems in Physiology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Fernandez%20de%20Canete">J. Fernandez de Canete</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object-oriented modeling is spreading in the current simulation of physiological systems through the use of the individual components of the model and its interconnections to define the underlying dynamic equations. In this paper, we describe the use of both the SIMSCAPE and MODELICA simulation environments in the object-oriented modeling of the closed-loop cardiovascular system. The performance of the controlled system was analyzed by simulation in light of the existing hypothesis and validation tests previously performed with physiological data. The described approach represents a valuable tool in the teaching of physiology for graduate medical students. 
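The component-based idea can be illustrated outside Simscape/Modelica with a plain class that owns its own dynamic equation. The two-element Windkessel model below is a standard textbook stand-in for the arterial portion of the cardiovascular loop, not the authors' model; parameter values are illustrative.

```python
class Windkessel:
    """Two-element Windkessel component: dP/dt = (Q_in - P/R) / C,
    arterial pressure P driven by inflow Q_in, with peripheral
    resistance R and arterial compliance C (illustrative units)."""
    def __init__(self, R=1.0, C=1.5, P=80.0):
        self.R, self.C, self.P = R, C, P

    def step(self, q_in, dt):
        # Forward-Euler integration of the component's own equation.
        dPdt = (q_in - self.P / self.R) / self.C
        self.P += dPdt * dt
        return self.P

model = Windkessel()
# With constant inflow 90, pressure relaxes toward Q_in * R = 90.
for _ in range(10000):
    model.step(q_in=90.0, dt=0.001)
```

In a Modelica-style setup, components like this would be declared separately and their ports connected, with the tool assembling the coupled equations; the class above shows only the single-component idea.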
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object-oriented%20modeling" title="object-oriented modeling">object-oriented modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=SIMSCAPE%20simulation%20language" title=" SIMSCAPE simulation language"> SIMSCAPE simulation language</a>, <a href="https://publications.waset.org/abstracts/search?q=MODELICA%20simulation%20language" title=" MODELICA simulation language"> MODELICA simulation language</a>, <a href="https://publications.waset.org/abstracts/search?q=cardiovascular%20system" title=" cardiovascular system"> cardiovascular system</a> </p> <a href="https://publications.waset.org/abstracts/28645/object-oriented-programming-for-modeling-and-simulation-of-systems-in-physiology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28645.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1202</span> Theoretical Approaches to Graphic and Formal Generation from Evolutionary Genetics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Luz%20Estrada">Luz Estrada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The currents of evolutionary materialistic thought have argued that knowledge about an object is not obtained through the abstractive method. That is, the object cannot come to be understood if founded upon itself, nor does it take place by the encounter between form and matter. 
According to this affirmation, the research presented here identified as a problematic situation the absence of comprehension of the formal creation as a generative operation. This has been referred to as a recurrent lack in the production of objects and corresponds to the need to conceive the configurative process from the reality of its genesis. In this case, it is of interest to explore ways of creation that consider the object as if it were a living organism, as well as responding to the object’s experience as embodied in the designer since it unfolds its genesis simultaneously to the ways of existence of those who are involved in the generative experience. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=architecture" title="architecture">architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=theoretical%20graphics" title=" theoretical graphics"> theoretical graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=evolutionary%20genetics" title=" evolutionary genetics"> evolutionary genetics</a>, <a href="https://publications.waset.org/abstracts/search?q=formal%20perception" title=" formal perception"> formal perception</a> </p> <a href="https://publications.waset.org/abstracts/158586/theoretical-approaches-to-graphic-and-formal-generation-from-evolutionary-genetics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158586.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=2">2</a></li> <li 
class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=41">41</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=42">42</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object%20finder&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" 
rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a 
href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
