
Search results for: shadow detection

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="shadow detection"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3548</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: shadow detection</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3548</span> A Novel Spectral Index for Automatic Shadow Detection in Urban Mapping Based on WorldView-2 Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kaveh%20Shahi">Kaveh Shahi</a>, <a href="https://publications.waset.org/abstracts/search?q=Helmi%20Z.%20M.%20Shafri"> Helmi Z. M. Shafri</a>, <a href="https://publications.waset.org/abstracts/search?q=Ebrahim%20Taherzadeh"> Ebrahim Taherzadeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In remote sensing, shadow causes problems in many applications such as change detection and classification. It is caused by objects which are elevated, thus can directly affect the accuracy of information. For these reasons, it is very important to detect shadows particularly in urban high spatial resolution imagery which created a significant problem. This paper focuses on automatic shadow detection based on a new spectral index for multispectral imagery known as Shadow Detection Index (SDI). The new spectral index was tested on different areas of World-View 2 images and the results demonstrated that the new spectral index has a massive potential to extract shadows effectively and automatically. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spectral%20index" title="spectral index">spectral index</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20detection" title=" shadow detection"> shadow detection</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing%20images" title=" remote sensing images"> remote sensing images</a>, <a href="https://publications.waset.org/abstracts/search?q=World-View%202" title=" World-View 2"> World-View 2</a> </p> <a href="https://publications.waset.org/abstracts/13500/a-novel-spectral-index-for-automatic-shadow-detection-in-urban-mapping-based-on-worldview-2-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13500.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">538</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3547</span> A Fast Silhouette Detection Algorithm for Shadow Volumes in Augmented Reality</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hoshang%20Kolivand">Hoshang Kolivand</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahyar%20Kolivand"> Mahyar Kolivand</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Shahrizal%20Sunar"> Mohd Shahrizal Sunar</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Azhar%20M.%20Arsad"> Mohd Azhar M. Arsad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Real-time shadow generation in virtual environments and Augmented Reality (AR) was always a hot topic in the last three decades. Lots of calculation for shadow generation among AR needs a fast algorithm to overcome this issue and to be capable of implementing in any real-time rendering. In this paper, a silhouette detection algorithm is presented to generate shadows for AR systems. &Delta;+ algorithm is presented based on extending edges of occluders to recognize which edges are silhouettes in the case of real-time rendering. An accurate comparison between the proposed algorithm and current algorithms in silhouette detection is done to show the reduction calculation by presented algorithm. The algorithm is tested in both virtual environments and AR systems. We think that this algorithm has the potential to be a fundamental algorithm for shadow generation in all complex environments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=silhouette%20detection" title="silhouette detection">silhouette detection</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20volumes" title=" shadow volumes"> shadow volumes</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20shadows" title=" real-time shadows"> real-time shadows</a>, <a href="https://publications.waset.org/abstracts/search?q=rendering" title=" rendering"> rendering</a>, <a href="https://publications.waset.org/abstracts/search?q=augmented%20reality" title=" augmented reality"> augmented reality</a> </p> <a href="https://publications.waset.org/abstracts/46127/a-fast-silhouette-detection-algorithm-for-shadow-volumes-in-augmented-reality" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46127.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">443</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3546</span> Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Waqqas-ur-Rehman%20Butt">Waqqas-ur-Rehman Butt</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20Servin"> Martin Servin</a>, <a href="https://publications.waset.org/abstracts/search?q=Marion%20Pause"> Marion Pause</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, object detection has gained much attention and very encouraging research area in the field of computer vision. The robust object boundaries detection in an image is demanded in numerous applications of human computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management and environmental services. Inappropriately, to the best of our knowledge, object detection under illumination with shadow consideration has not been well solved yet. Furthermore, this problem is also one of the major hurdles to keeping an object detection method from the practical applications. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Image capturing conditions the algorithms need to consider a variety of possible environmental factors as the colour information, lightening and shadows varies from image to image. Existing methods mostly failed to produce the appropriate result due to variation in colour information, lightening effects, threshold specifications, histogram dependencies and colour ranges. To overcome these limitations we propose an object detection algorithm, with pre-processing methods, to reduce the interference caused by shadow and illumination effects without fixed parameters. We use the Y CrCb colour model without any specific colour ranges and predefined threshold values. The segmented object regions are further classified using morphological operations (Erosion and Dilation) and contours. 
Proposed approach applied on a large image data set acquired under various environmental conditions for wood stack detection. Experiments show the promising result of the proposed approach in comparison with existing methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=illumination%20equalization" title=" illumination equalization"> illumination equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20filtering" title=" shadow filtering"> shadow filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a> </p> <a href="https://publications.waset.org/abstracts/77157/object-detection-in-digital-images-under-non-standardized-conditions-using-illumination-and-shadow-filtering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77157.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">216</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3545</span> Interactive Shadow Play Animation System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bo%20Wan">Bo Wan</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiu%20Wen"> Xiu Wen</a>, <a href="https://publications.waset.org/abstracts/search?q=Lingling%20An"> Lingling An</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoling%20Ding"> Xiaoling Ding</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper describes a Chinese shadow play animation system based on Kinect. Users, without any professional training, can personally manipulate the shadow characters to finish a shadow play performance by their body actions and get a shadow play video through giving the record command to our system if they want. In our system, Kinect is responsible for capturing human movement and voice commands data. Gesture recognition module is used to control the change of the shadow play scenes. After packaging the data from Kinect and the recognition result from gesture recognition module, VRPN transmits them to the server-side. At last, the server-side uses the information to control the motion of shadow characters and video recording. This system not only achieves human-computer interaction, but also realizes the interaction between people. It brings an entertaining experience to users and easy to operate for all ages. Even more important is that the application background of Chinese shadow play embodies the protection of the art of shadow play animation. 
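Note: the exact pre-processing pipeline is not given in the abstract. The OpenCV sketch below only illustrates the general flow the authors describe: convert to YCrCb, equalize the luma channel to reduce uneven illumination, segment with a data-driven (Otsu) threshold rather than fixed values, and clean the mask with erosion and dilation before extracting contours. Kernel size and area filter are illustrative.

```python
import cv2
import numpy as np

def detect_objects(bgr: np.ndarray):
    # Work in YCrCb so luminance (illumination/shadow) is separated from chroma.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)

    # Equalize the luma channel to compensate for uneven illumination.
    y_eq = cv2.equalizeHist(y)

    # Data-driven threshold (Otsu) instead of predefined values.
    _, mask = cv2.threshold(y_eq, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological clean-up: erosion removes speckle, dilation restores shape.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=2)

    # Candidate object regions come out as contours of the cleaned mask.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > 500]  # area filter is illustrative

# Usage: regions = detect_objects(cv2.imread("wood_stack.jpg"))
```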
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hadow%20play%20animation" title="hadow play animation">hadow play animation</a>, <a href="https://publications.waset.org/abstracts/search?q=Kinect" title=" Kinect"> Kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=VRPN" title=" VRPN"> VRPN</a>, <a href="https://publications.waset.org/abstracts/search?q=HCI" title=" HCI"> HCI</a> </p> <a href="https://publications.waset.org/abstracts/19293/interactive-shadow-play-animation-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19293.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3544</span> Upcoming Fight Simulation with Smart Shadow</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ramiz%20Kuliev">Ramiz Kuliev</a>, <a href="https://publications.waset.org/abstracts/search?q=Fuad%20Kuliev-Smirnov"> Fuad Kuliev-Smirnov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The 'Shadow Sparring' training exercise is widely used in the training of boxers and martial artists. The main disadvantage of the usual shadow sparring is that the trainer cannot fully control such training and evaluate its results. During the competition, the athlete, preparing for the upcoming fight, imagines the Shadow (upcoming opponent) in accordance with his own imagination. A ‘Smart-Shadow Sparring’ (SSS) is an innovative version of the ‘Shadow Sparring’. During SSS, the fighter will see the Shadow (virtual opponent that moves, defends, and punches) and understand when he misses the punches from the Shadow. The task of a real athlete is to spar with a virtual one, move around, punch in the direction of unprotected areas of the Shadow and dodge his punches. Moves and punches of Shadow are set up before each training. The system will give the coach full information about virtual sparring: (i) how many and what type of punches has the fighter landed, (ii) accuracy of these punches, (iii) how many and what type of virtual punches (punches of Smart-Shadow) has the fighter missed, etc. SSS will be recorded as animated fighting of two fighters and will help the coach to analyze past training. SSS can be configured to fit the physical and technical characteristics of the next real opponent (size, techniques, speed, missed and landed punches, etc.). This will allow to simulate and rehearse the upcoming fight and improve readiness for the next opponent. For amateur fighters, SSS will be reconfigured several times during a tournament, when the real opponent becomes known. SSS can be used in three versions: (1) Digital Shadow: the athlete will see a Shadow on a monitor (2) VR-Shadow: the athlete will see a Shadow in a VR-glasses (3) Smart Shadow: a Shadow will be controlled by artificial intelligence. These technologies are based on the ‘semi-real simulation’ method. The technology allows coaches to train athletes remotely. Simulation of different opponents will help the athletes better prepare for competition. 
Repeat rehearsals of the upcoming fight will help improve results. SSS can improve results in Boxing, Taekwondo, Karate, and Fencing. 41 sets of medals will be awarded in these sports at the 2020 Olympic Games. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=boxing" title="boxing">boxing</a>, <a href="https://publications.waset.org/abstracts/search?q=combat%20sports" title=" combat sports"> combat sports</a>, <a href="https://publications.waset.org/abstracts/search?q=fight%20simulation" title=" fight simulation"> fight simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20sparring" title=" shadow sparring"> shadow sparring</a> </p> <a href="https://publications.waset.org/abstracts/130606/upcoming-fight-simulation-with-smart-shadow" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130606.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3543</span> Toward the Understanding of Shadow Port&#039;s Growth: The Level of Shadow Port</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chayakarn%20Bamrungbutr">Chayakarn Bamrungbutr</a>, <a href="https://publications.waset.org/abstracts/search?q=James%20Sillitoe"> James Sillitoe </a> </p> <p class="card-text"><strong>Abstract:</strong></p> The term ‘shadow port’ is used to describe a port whose markets are dominated by an adjacent port that has a more competitive capability. Recently, researchers have put effort into studying the mechanisms of how a regional port, in the shadow of a nearby predominant port which is a capital city port, can compete and grow. However, such mechanism is still unclear. This study thus focuses on understanding the growth of shadow port and the type of shadow port by using the two capital city ports of Thailand; Bangkok port (the former main port) and Laem Chabang port (the current main port), as the case study. By developing an understanding of the mechanisms of shadow, port could ultimately lead to an increase in the competitiveness. In this study, a framework of opportunity capture (introduced by Magala, 2004) will be used to create a framework for the study of the growth of the selected shadow port. In the process of building this framework, five groups of port development experts, consisting of government, council, academia, logistics provider and industry, will be interviewed. To facilitate this work, the Noticing, Collecting and Thinking model which was developed by Seidel (1998) will be used in an analysis of the dataset. The resulting analysis will be used to classify the type of shadow port. The type of these ports will be a significant factor for developing a feasible strategic guideline for the future management planning of ports, particularly, shadow ports, and then to increase the competitiveness of a nation’s maritime transport industry, and eventually lead to a boost in the national economy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=shadow%20port" title="shadow port">shadow port</a>, <a href="https://publications.waset.org/abstracts/search?q=Bangkok%20Port" title=" Bangkok Port"> Bangkok Port</a>, <a href="https://publications.waset.org/abstracts/search?q=Laem%20Chabang%20Port" title=" Laem Chabang Port"> Laem Chabang Port</a>, <a href="https://publications.waset.org/abstracts/search?q=port%20growth" title=" port growth"> port growth</a> </p> <a href="https://publications.waset.org/abstracts/85068/toward-the-understanding-of-shadow-ports-growth-the-level-of-shadow-port" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85068.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3542</span> Automatic Extraction of Arbitrarily Shaped Buildings from VHR Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Evans%20Belly">Evans Belly</a>, <a href="https://publications.waset.org/abstracts/search?q=Imdad%20Rizvi"> Imdad Rizvi</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20M.%20Kadam"> M. M. Kadam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Satellite imagery is one of the emerging technologies which are extensively utilized in various applications such as detection/extraction of man-made structures, monitoring of sensitive areas, creating graphic maps etc. The main approach here is the automated detection of buildings from very high resolution (VHR) optical satellite images. Initially, the shadow, the building and the non-building regions (roads, vegetation etc.) are investigated wherein building extraction is mainly focused. Once all the landscape is collected a trimming process is done so as to eliminate the landscapes that may occur due to non-building objects. Finally the label method is used to extract the building regions. The label method may be altered for efficient building extraction. The images used for the analysis are the ones which are extracted from the sensors having resolution less than 1 meter (VHR). This method provides an efficient way to produce good results. The additional overhead of mid processing is eliminated without compromising the quality of the output to ease the processing steps required and time consumed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=building%20detection" title="building detection">building detection</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20detection" title=" shadow detection"> shadow detection</a>, <a href="https://publications.waset.org/abstracts/search?q=landscape%20generation" title=" landscape generation"> landscape generation</a>, <a href="https://publications.waset.org/abstracts/search?q=label" title=" label"> label</a>, <a href="https://publications.waset.org/abstracts/search?q=partitioning" title=" partitioning"> partitioning</a>, <a href="https://publications.waset.org/abstracts/search?q=very%20high%20resolution%20%28VHR%29%20satellite%20imagery" title=" very high resolution (VHR) satellite imagery"> very high resolution (VHR) satellite imagery</a> </p> <a href="https://publications.waset.org/abstracts/76690/automatic-extraction-of-arbitrarily-shaped-buildings-from-vhr-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/76690.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">314</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3541</span> In Search of Bauman’s Moral Impulse in Shadow Factories of China </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Akram%20Hatami">Akram Hatami</a>, <a href="https://publications.waset.org/abstracts/search?q=Naser%20Firoozi"> Naser Firoozi</a>, <a href="https://publications.waset.org/abstracts/search?q=Vesa%20Puhakka"> Vesa Puhakka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ethics and responsibility are rapidly becoming a distinguishing feature of organizations. In this paper, we analyze ethics and responsibility in shadow factories in China. We engage ourselves with Bauman&rsquo;s moral impulse perspective because his idea can contextualize ethics and responsibility. Moral impulse is a feeling of a selfless, infinite and unconditional responsibility towards, and care for, Others. We analyze a case study from a secondary data source because, for such a critical phenomenon as business ethics in shadow factories, collecting primary data is difficult, since they are unregistered factories. We argue that there has not been enough attention given to the ethics and responsibility in shadow factories in China. Our main goal is to demonstrate that, considering the Other, more importantly the employees, in ethical decision-making is a simple instruction beyond the narrow version of ethics by ethical codes and rules. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=moral%20impulse" title="moral impulse">moral impulse</a>, <a href="https://publications.waset.org/abstracts/search?q=responsibility" title=" responsibility"> responsibility</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20factories" title=" shadow factories"> shadow factories</a>, <a href="https://publications.waset.org/abstracts/search?q=Bauman%E2%80%99s%20moral%20impulse" title=" Bauman’s moral impulse"> Bauman’s moral impulse</a> </p> <a href="https://publications.waset.org/abstracts/84339/in-search-of-baumans-moral-impulse-in-shadow-factories-of-china" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84339.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">329</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3540</span> Estimating Directional Shadow Prices of Air Pollutant Emissions by Transportation Modes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huey-Kuo%20Chen">Huey-Kuo Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper applies directional marginal productivity model to study the shadow price of emissions by transportation modes in the years of 2011 and 2013 with the aim to provide a reference for policy makers to improve the emission of pollutants. One input variable (i.e., energy consumption), one desirable output variable (i.e., vehicle kilometers traveled) and three undesirable output variables (i.e., carbon dioxide, sulfur oxides and nitrogen oxides) generated by road transportation modes were used to evaluate directional marginal productivity and directional shadow price for 18 transportation modes. The results show that the directional shadow price (DSP) of SOx is much higher than CO2 and NOx. Nevertheless, the emission of CO2 is the largest among the three kinds of pollutants. To improve the air quality, the government should pay more attention to the emission of CO2 and apply the alternative solution such as promoting public transportation and subsidizing electric vehicles to reduce the use of private vehicles. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=marginal%20productivity" title="marginal productivity">marginal productivity</a>, <a href="https://publications.waset.org/abstracts/search?q=road%20transportation%20modes" title=" road transportation modes"> road transportation modes</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20price" title=" shadow price"> shadow price</a>, <a href="https://publications.waset.org/abstracts/search?q=undesirable%20outputs" title=" undesirable outputs"> undesirable outputs</a> </p> <a href="https://publications.waset.org/abstracts/123589/estimating-directional-shadow-prices-of-air-pollutant-emissions-by-transportation-modes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/123589.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3539</span> A Case Study of Zhang Yimou, Using Color Evidence From “Hero and the Shadow” and How the Color Is Symbolized in Contemporary Society?</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rakiba%20Sultana">Rakiba Sultana</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates how different colors are used and bring symbolic meaning comparatively in Zhang Yimou's movies Hero and Shadow. The study also explores how those colors are symbolized in contemporary society. The researcher analyzes the movies Hero and the Shadow to investigate them using colors and how they are used in contemporary society. Hero exposes the colorful colors to expose the Chinese traditions, whereas Shadow explores the gray, black, and white with the ink paints. Also, in contemporary society, sometimes, the author gets a similar symbolic meaning of the colors. Sometimes, the contemporary's meaning is different from the one used in these two movies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chinese%20movie" title="Chinese movie">Chinese movie</a>, <a href="https://publications.waset.org/abstracts/search?q=visuals" title=" visuals"> visuals</a>, <a href="https://publications.waset.org/abstracts/search?q=colors" title=" colors"> colors</a>, <a href="https://publications.waset.org/abstracts/search?q=traditional%20painting" title=" traditional painting"> traditional painting</a>, <a href="https://publications.waset.org/abstracts/search?q=contemporary%20society" title=" contemporary society"> contemporary society</a>, <a href="https://publications.waset.org/abstracts/search?q=and%20Western%20countries" title=" and Western countries"> and Western countries</a> </p> <a href="https://publications.waset.org/abstracts/153113/a-case-study-of-zhang-yimou-using-color-evidence-from-hero-and-the-shadow-and-how-the-color-is-symbolized-in-contemporary-society" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153113.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">112</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3538</span> Developing a Shadow Port: A Case Study of Bangkok Port and Laem Chabang Port, Thailand</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20Bamrungbutr">C. Bamrungbutr</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Sillitoe"> J. Sillitoe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Maritime transportation has been a crucial part of world economics. Recently, researchers have put effort into studying the mechanisms of how a regional port, in the shadow of a nearby predominant port, can compete and grow. However, limited research has focused on the competition issues for a shadow port which is a capital city port. This study will thus focus on this question of the growth of a capital city port which is under the shadow of the adjacent capital city port by using the two capital city ports of Thailand; Bangkok port (the former main port) and Laem Chabang port (the current main port). For this work, a framework of opportunity capture will be used, and five groups of port development experts (government, council, logistics provider, academia and industry) will be interviewed. The responses will be analysed using the noticing, collecting and thinking model. The resulting analysis will be appropriate for use in developing guidelines for the future management of a range of shadow ports established in a capital city, enabling them to operate in a competitive environment more effectively. The resultant growth of these ports will be a significant factor in increasing the competitiveness of a nation’s maritime transport industry and eventually lead to a boost in the national economy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=shadow%20port" title="shadow port">shadow port</a>, <a href="https://publications.waset.org/abstracts/search?q=Bangkok%20Port" title=" Bangkok Port"> Bangkok Port</a>, <a href="https://publications.waset.org/abstracts/search?q=Laem%20Chabang%20Port" title=" Laem Chabang Port"> Laem Chabang Port</a>, <a href="https://publications.waset.org/abstracts/search?q=port%20competition" title=" port competition"> port competition</a> </p> <a href="https://publications.waset.org/abstracts/85066/developing-a-shadow-port-a-case-study-of-bangkok-port-and-laem-chabang-port-thailand" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85066.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">173</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3537</span> Investigation of the Factors Influencing the Construction Planning Process Using Participant Observation Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashokkumar%20Subbiah">Ashokkumar Subbiah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates the impact of factors that influenced the success of construction planning for a major construction project in Qatar. An approach of participant observation is adopted which is informed by the principles of ethnography: one that reports the participants’ view of their world rather than imposing an artificial theoretical framework upon it. As participant observant, key factors were observed and identified that had an impact on the management and execution of the construction planning. It is found that a ‘shadow culture’ exists between the project participants which, it is argued, is only observable from the perspective of an embedded participant observer. The shadow culture acts to enable the management of the planning process, and its efficacy relates to the ‘quality’ of human inter-relationships amongst immediate stakeholders. Whilst this study uses the concept of shadow culture, it is treated as both a methodological stance and one of the findings of this research in the context of the major construction project in Qatar. The concept of shadow culture is not imposed upon the findings, but instead is used as a research tool: respondents report their own worldview and this is reported from the view of a participant observant in a manner that is understandable and useful to those who are not part of the construction project. The findings of this study identify similar factors influencing the planning process of the Qatar project, but the shadow culture predominantly influences these factors towards the failure of planning process. The research concludes by questioning the assumption that construction planning is a mechanistic process that has to be conducted solely by the planning team. Instead, it is a highly social phenomenon in which the seemingly mechanistic process is made workable by the quality of relationships that exist in the project. 
Drawing on this, the final section provides a series of recommendations that may help enhance the efficacy of project planning, including better training and education at the pre-construction phase, recognition of the importance of shadow processes at management level, and better appreciation of the impact of contract type and chosen procurement route.
Keywords: construction planning, participant observation, project participants, shadow culture
Procedia: https://publications.waset.org/abstracts/87078/investigation-of-the-factors-influencing-the-construction-planning-process-using-participant-observation-method | PDF: https://publications.waset.org/abstracts/87078.pdf | Downloads: 298

3536. Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing
Authors: Derlis Gregor, Kevin Cikel, Mario Arzamendia, Raúl Gregor
Abstract: This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on a single-board computer (SBC), such as the Raspberry Pi 2, so that it can run in real time. The first step limits the zone of the image to be processed. The second step detects moving objects using a background subtraction (BGS) algorithm based on a Gaussian mixture model (GMM), together with a shadow removal algorithm using physical-based features, followed by morphological operations. The third step performs vehicle detection using edge detection algorithms and vehicle tracking with Kalman filters. The last step registers each vehicle passing and classifies it according to its area. A self-sustaining setup is proposed, powered by batteries and photovoltaic solar panels, with data transmission over GPRS (General Packet Radio Service), eliminating the need for external cabling and making the system easy to deploy and relocate to any site where it can operate. The self-sustaining trailer will allow vehicles to be counted and classified in specific zones with difficult access.
Keywords: intelligent transportation system, object detection, vehicle counting, vehicle classification, video processing
Procedia: https://publications.waset.org/abstracts/43870/design-and-implementation-of-a-counting-and-differentiation-system-for-vehicles-through-video-processing | PDF: https://publications.waset.org/abstracts/43870.pdf | Downloads: 322
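Note: as a concrete illustration of the second and last steps (GMM background subtraction plus shadow removal, morphology, and area-based classification), the sketch below uses OpenCV's MOG2 subtractor, which implements a Gaussian mixture model and marks shadow pixels separately so they can be discarded. The thresholds, kernel size and area classes are assumptions, not the authors' parameters.

```python
import cv2

# MOG2 implements GMM background subtraction; detectShadows=True makes it
# label shadow pixels with the value 127 instead of 255 in the foreground mask.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("traffic.mp4")            # illustrative input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)
    # Keep only confident foreground (255) and drop shadow pixels (127).
    vehicles = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]
    vehicles = cv2.morphologyEx(vehicles, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(vehicles, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Classify blobs by area, as in the last step described above.
    for c in contours:
        area = cv2.contourArea(c)
        label = "large" if area > 5000 else "small" if area > 800 else None
        if label:
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```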
3535. Valuing Social Sustainability in Agriculture: An Approach Based on Social Outputs' Shadow Prices
Authors: Amer Ait Sidhoum
Abstract: Interest in sustainability has gained ground among practitioners, academics and policy-makers due to growing stakeholder awareness of environmental and social concerns. This is particularly true for agriculture. However, relatively little research has been conducted on the quantification of social sustainability and the contribution of social issues to agricultural production efficiency. The main objective of this research is to propose a method for evaluating the prices of social outputs, more precisely their shadow prices, while allowing for the stochastic nature of agricultural production, that is, for production uncertainty. The assessment of social outputs' shadow prices is conducted within the methodological framework of nonparametric Data Envelopment Analysis (DEA). An output-oriented directional distance function (DDF) is implemented to represent the technology of a sample of Catalan arable crop farms and to derive efficiency scores. The overall production technology of the sample is assumed to be the intersection of two sub-technologies: the first models the production of random desirable agricultural outputs, while the second reflects the social outcomes of agricultural activities. Once a nonparametric production technology has been represented, the primal DDF approach can be used for efficiency measurement, while shadow prices are drawn from the dual representation of the DDF. Computing shadow prices is a way to assign an economic value to non-marketed social outcomes. Our research uses cross-sectional, farm-level data collected in 2015 from a sample of 180 Catalan arable crop farms specialized in the production of cereals, oilseeds and protein (COP) crops. Our results suggest that the sample farms show high performance scores, from 85% for the bad state of nature to 88% for normal and ideal crop-growing conditions, indicating that farm performance increases as crop-growing conditions improve. Results also show that the average shadow prices of desirable state-contingent outputs and social outcomes are positive for both efficient and inefficient farms, suggesting that the production of desirable marketable outputs and of non-marketable outputs makes a positive contribution to farm production efficiency. The social outputs' shadow prices are contingent on the growing conditions and follow an upward trend as those conditions improve, which suggests that efficient farms prefer to allocate more resources to the production of desirable outputs than to social outcomes. To our knowledge, this study represents the first attempt to compute shadow prices of social outcomes while accounting for the stochastic nature of the production technology. Our findings suggest that the decision-making of efficient farms in dealing with social issues is stochastic and strongly dependent on growing conditions. This implies that policy-makers should adjust their instruments to the stochastic environmental conditions: an optimal redistribution of rural development support, increasing public payments as crop-growing conditions improve, would likely enhance the effectiveness of public policies.
Keywords: data envelopment analysis, shadow prices, social sustainability, sustainable farming
Procedia: https://publications.waset.org/abstracts/96317/valuing-social-sustainability-in-agriculture-an-approach-based-on-social-outputs-shadow-prices | PDF: https://publications.waset.org/abstracts/96317.pdf | Downloads: 126
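Note: the abstract does not reproduce its formulas. For reference, the standard output-oriented directional distance function used in this kind of DEA analysis, and the usual duality-based shadow-pricing rule, can be sketched as follows; the notation is generic textbook background, not the paper's exact specification.

```latex
% Output-oriented directional distance function over the technology set T,
% expanding desirable outputs y and social outputs s along the direction g = (g_y, g_s):
\vec{D}_o(x, y, s; g_y, g_s) \;=\; \max \{\, \beta \ge 0 \;:\; (x,\; y + \beta g_y,\; s + \beta g_s) \in T \,\}

% Given the market price p_m of a marketed output y_m, duality gives the shadow price
% of a non-marketed social output s_k as a ratio of the gradients of the DDF:
q_k \;=\; p_m \cdot \frac{\partial \vec{D}_o / \partial s_k}{\partial \vec{D}_o / \partial y_m}
```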
3534. A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System
Authors: Mulugeta K. Tefera, Xiaolong Yang, Jian Liu
Abstract: Detection and tracking of moving objects are very important in many application contexts, such as detection and recognition of people, visual surveillance and automatic generation of video effects. However, detecting the real shape of an object in motion is tricky due to challenges such as dynamic scene changes, the presence of shadows and illumination variations caused by light switching. Once a moving object has been detected, tracking is also a crucial step for applications in military defense, video surveillance, human-computer interaction and medical diagnostics, as well as in commercial fields such as video games. In this paper, an object present in a dynamic background is detected using an adaptive mixture-of-Gaussians analysis of the video sequences, and the detected moving object is then tracked using region-based moving object tracking and inter-frame differencing to address partial overlapping and occlusion problems. First, the detection algorithm detects and extracts the moving object target using enhancement and post-processing morphological operations. Second, region-based tracking and inter-frame differencing are used to improve the tracking speed of real-time moving objects across video frames. Finally, a plotting method is applied to display the detected objects and describe the motion of the tracked object. Experiments were performed on image sequences acquired in both indoor and outdoor environments, using one stationary camera and one web camera.
Keywords: background modeling, Gaussian mixture model, inter-frame difference, object detection and tracking, video surveillance
Procedia: https://publications.waset.org/abstracts/78578/a-real-time-moving-object-detection-and-tracking-scheme-and-its-implementation-for-video-surveillance-system | PDF: https://publications.waset.org/abstracts/78578.pdf | Downloads: 477
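Note: as a minimal illustration of the inter-frame differencing used alongside the region-based tracker, the sketch below thresholds the absolute difference of consecutive grayscale frames; the threshold value and dilation are assumptions, not the authors' settings.

```python
import cv2

def frame_difference_mask(prev_gray, gray, thresh=25):
    """Binary motion mask from two consecutive grayscale frames."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # A small dilation joins fragmented motion blobs before region tracking.
    return cv2.dilate(mask, None, iterations=2)

cap = cv2.VideoCapture(0)                      # stationary camera, as in the experiments
ok, prev = cap.read()
if not ok:
    raise RuntimeError("no frame from camera")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = frame_difference_mask(prev_gray, gray)
    prev_gray = gray
cap.release()
```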
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20modeling" title="background modeling">background modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=inter-frame%20difference" title=" inter-frame difference"> inter-frame difference</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20and%20tracking" title=" object detection and tracking"> object detection and tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/78578/a-real-time-moving-object-detection-and-tracking-scheme-and-its-implementation-for-video-surveillance-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78578.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">477</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3533</span> Immuno-field Effect Transistor Using Carbon Nanotubes Network – Based for Human Serum Albumin Highly Sensitive Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhamad%20Azuddin%20Hassan">Muhamad Azuddin Hassan</a>, <a href="https://publications.waset.org/abstracts/search?q=Siti%20Shafura%20Karim"> Siti Shafura Karim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ambri%20Mohamed"> Ambri Mohamed</a>, <a href="https://publications.waset.org/abstracts/search?q=Iskandar%20Yahya"> Iskandar Yahya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human serum albumin plays a significant part in the physiological functions of the human body system (HSA).HSA level monitoring is critical for early detection of HSA-related illnesses. The goal of this study is to show that a field effect transistor (FET)-based immunosensor can assess HSA using high aspect ratio carbon nanotubes network (CNT) as a transducer. The CNT network were deposited using air brush technique, and the FET device was made using a shadow mask process. Field emission scanning electron microscopy and a current-voltage measurement system were used to examine the morphology and electrical properties of the CNT network, respectively. X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy were used to confirm the surface alteration of the CNT. The detection process is based on covalent binding interactions between an antibody and an HSA target, which resulted in a change in the manufactured biosensor's drain current (Id).In a linear range between 1 ng/ml and 10zg/ml, the biosensor has a high sensitivity of 0.826 mA (g/ml)-1 and a LOD value of 1.9zg/ml.HSA was also identified in a genuine serum despite interference from other biomolecules, demonstrating the CNT-FET immunosensor's ability to quantify HSA in a complex biological environment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=carbon%20nanotubes%20network" title="carbon nanotubes network">carbon nanotubes network</a>, <a href="https://publications.waset.org/abstracts/search?q=biosensor" title=" biosensor"> biosensor</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20serum%20albumin" title=" human serum albumin"> human serum albumin</a> </p> <a href="https://publications.waset.org/abstracts/145494/immuno-field-effect-transistor-using-carbon-nanotubes-network-based-for-human-serum-albumin-highly-sensitive-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145494.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">137</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3532</span> Efficient Signal Detection Using QRD-M Based on Channel Condition in MIMO-OFDM System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae-Jeong%20Kim">Jae-Jeong Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ki-Ro%20Kim"> Ki-Ro Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Kyu%20Song"> Hyoung-Kyu Song</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an efficient signal detector that switches M parameter of QRD-M detection scheme is proposed for MIMO-OFDM system. The proposed detection scheme calculates the threshold by 1-norm condition number and then switches M parameter of QRD-M detection scheme according to channel information. If channel condition is bad, the parameter M is set to high value to increase the accuracy of detection. If channel condition is good, the parameter M is set to low value to reduce complexity of detection. Therefore, the proposed detection scheme has better trade off between BER performance and complexity than the conventional detection scheme. The simulation result shows that the complexity of proposed detection scheme is lower than QRD-M detection scheme with similar BER performance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MIMO-OFDM" title="MIMO-OFDM">MIMO-OFDM</a>, <a href="https://publications.waset.org/abstracts/search?q=QRD-M" title=" QRD-M"> QRD-M</a>, <a href="https://publications.waset.org/abstracts/search?q=channel%20condition" title=" channel condition"> channel condition</a>, <a href="https://publications.waset.org/abstracts/search?q=BER" title=" BER"> BER</a> </p> <a href="https://publications.waset.org/abstracts/3518/efficient-signal-detection-using-qrd-m-based-on-channel-condition-in-mimo-ofdm-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3518.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">370</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3531</span> Reduced Complexity of ML Detection Combined with DFE</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae-Hyun%20Ro">Jae-Hyun Ro</a>, <a href="https://publications.waset.org/abstracts/search?q=Yong-Jun%20Kim"> Yong-Jun Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Chang-Bin%20Ha"> Chang-Bin Ha</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Kyu%20Song"> Hyoung-Kyu Song </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, many detection schemes have been developed to improve the error performance and to reduce the complexity. Maximum likelihood (ML) detection has optimal error performance but it has very high complexity. Thus, this paper proposes reduced complexity of ML detection combined with decision feedback equalizer (DFE). The error performance of the proposed detection scheme is higher than the conventional DFE. But the complexity of the proposed scheme is lower than the conventional ML detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=detection" title="detection">detection</a>, <a href="https://publications.waset.org/abstracts/search?q=DFE" title=" DFE"> DFE</a>, <a href="https://publications.waset.org/abstracts/search?q=MIMO-OFDM" title=" MIMO-OFDM"> MIMO-OFDM</a>, <a href="https://publications.waset.org/abstracts/search?q=ML" title=" ML"> ML</a> </p> <a href="https://publications.waset.org/abstracts/42215/reduced-complexity-of-ml-detection-combined-with-dfe" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42215.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">610</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3530</span> Characteristics of Different Solar PV Modules under Partial Shading</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hla%20Hla%20Khaing">Hla Hla Khaing</a>, <a href="https://publications.waset.org/abstracts/search?q=Yit%20Jian%20Liang"> Yit Jian Liang</a>, <a href="https://publications.waset.org/abstracts/search?q=Nant%20Nyein%20Moe%20Htay"> Nant Nyein Moe Htay</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiang%20Fan"> Jiang Fan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Partial shadowing is one of the problems that are always faced in terrestrial applications of solar photovoltaic (PV). The effects of partial shadow on the energy yield of conventional mono-crystalline and multi-crystalline PV modules have been researched for a long time. With deployment of new thin-film solar PV modules in the market, it is important to understand the performance of new PV modules operating under the partial shadow in the tropical zone. This paper addresses the impacts of different partial shadowing on the operating characteristics of four different types of solar PV modules that include multi-crystalline, amorphous thin-film, CdTe thin-film and CIGS thin-film PV modules. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=partial%20shade" title="partial shade">partial shade</a>, <a href="https://publications.waset.org/abstracts/search?q=CdTe" title=" CdTe"> CdTe</a>, <a href="https://publications.waset.org/abstracts/search?q=CIGS" title=" CIGS"> CIGS</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-crystalline%20%28mc-Si%29" title=" multi-crystalline (mc-Si)"> multi-crystalline (mc-Si)</a>, <a href="https://publications.waset.org/abstracts/search?q=amorphous%20silicon%20%28a-Si%29" title=" amorphous silicon (a-Si)"> amorphous silicon (a-Si)</a>, <a href="https://publications.waset.org/abstracts/search?q=bypass%20diode" title=" bypass diode"> bypass diode</a> </p> <a href="https://publications.waset.org/abstracts/9357/characteristics-of-different-solar-pv-modules-under-partial-shading" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9357.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">450</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3529</span> Cigarette Smoke Detection Based on YOLOV3</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei%20Li">Wei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Tuo%20Yang"> Tuo Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to satisfy the real-time and accurate requirements of cigarette smoke detection in complex scenes, a cigarette smoke detection technology based on the combination of deep learning and color features was proposed. Firstly, based on the color features of cigarette smoke, the suspicious cigarette smoke area in the image is extracted. Secondly, combined with the efficiency of cigarette smoke detection and the problem of network overfitting, a network model for cigarette smoke detection was designed according to YOLOV3 algorithm to reduce the false detection rate. The experimental results show that the method is feasible and effective, and the accuracy of cigarette smoke detection is up to 99.13%, which satisfies the requirements of real-time cigarette smoke detection in complex scenes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=cigarette%20smoke%20detection" title=" cigarette smoke detection"> cigarette smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV3" title=" YOLOV3"> YOLOV3</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction" title=" color feature extraction"> color feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/159151/cigarette-smoke-detection-based-on-yolov3" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159151.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">87</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3528</span> An Architecture for New Generation of Distributed Intrusion Detection System Based on Preventive Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20Benmoussa">H. Benmoussa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20A.%20El%20Kalam"> A. A. El Kalam</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Ait%20Ouahman"> A. Ait Ouahman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The design and implementation of intrusion detection systems (IDS) remain an important area of research in the security of information systems. Despite the importance and reputation of the current intrusion detection systems, their efficiency and effectiveness remain limited as they should include active defense approach to allow anticipating and predicting intrusions before their occurrence. Consequently, they must be readapted. For this purpose we suggest a new generation of distributed intrusion detection system based on preventive detection approach and using intelligent and mobile agents. Our architecture benefits from mobile agent features and addresses some of the issues with centralized and hierarchical models. Also, it presents advantages in terms of increasing scalability and flexibility. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Intrusion%20Detection%20System%20%28IDS%29" title="Intrusion Detection System (IDS)">Intrusion Detection System (IDS)</a>, <a href="https://publications.waset.org/abstracts/search?q=preventive%20detection" title=" preventive detection"> preventive detection</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20agents" title=" mobile agents"> mobile agents</a>, <a href="https://publications.waset.org/abstracts/search?q=distributed%20architecture" title=" distributed architecture"> distributed architecture</a> </p> <a href="https://publications.waset.org/abstracts/18239/an-architecture-for-new-generation-of-distributed-intrusion-detection-system-based-on-preventive-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18239.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">583</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3527</span> Video Based Ambient Smoke Detection By Detecting Directional Contrast Decrease</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omair%20Ghori">Omair Ghori</a>, <a href="https://publications.waset.org/abstracts/search?q=Anton%20Stadler"> Anton Stadler</a>, <a href="https://publications.waset.org/abstracts/search?q=Stefan%20Wilk"> Stefan Wilk</a>, <a href="https://publications.waset.org/abstracts/search?q=Wolfgang%20Effelsberg"> Wolfgang Effelsberg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fire-related incidents account for extensive loss of life and material damage. Quick and reliable detection of occurring fires has high real world implications. Whereas a major research focus lies on the detection of outdoor fires, indoor camera-based fire detection is still an open issue. Cameras in combination with computer vision helps to detect flames and smoke more quickly than conventional fire detectors. In this work, we present a computer vision-based smoke detection algorithm based on contrast changes and a multi-step classification. This work accelerates computer vision-based fire detection considerably in comparison with classical indoor-fire detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contrast%20analysis" title="contrast analysis">contrast analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=early%20fire%20detection" title=" early fire detection"> early fire detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20smoke%20detection" title=" video smoke detection"> video smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/52006/video-based-ambient-smoke-detection-by-detecting-directional-contrast-decrease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52006.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3526</span> Intrusion Detection Techniques in NaaS in the Cloud: A Review </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashid%20Mahmood">Rashid Mahmood</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The network as a service (NaaS) usage has been well-known from the last few years in the many applications, like mission critical applications. In the NaaS, prevention method is not adequate as the security concerned, so the detection method should be added to the security issues in NaaS. The authentication and encryption are considered the first solution of the NaaS problem whereas now these are not sufficient as NaaS use is increasing. In this paper, we are going to present the concept of intrusion detection and then survey some of major intrusion detection techniques in NaaS and aim to compare in some important fields. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=IDS" title="IDS">IDS</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud" title=" cloud"> cloud</a>, <a href="https://publications.waset.org/abstracts/search?q=naas" title=" naas"> naas</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a> </p> <a href="https://publications.waset.org/abstracts/36475/intrusion-detection-techniques-in-naas-in-the-cloud-a-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3525</span> Multichannel Object Detection with Event Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rafael%20Iliasov">Rafael Iliasov</a>, <a href="https://publications.waset.org/abstracts/search?q=Alessandro%20Golkar"> Alessandro Golkar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection based on event vision has been a dynamically growing field in computer vision for the last 16 years. In this work, we create multiple channels from a single event camera and propose an event fusion method (EFM) to enhance object detection in event-based vision systems. Each channel uses a different accumulation buffer to collect events from the event camera. We implement YOLOv7 for object detection, followed by a fusion algorithm. Our multichannel approach outperforms single-channel-based object detection by 0.7% in mean Average Precision (mAP) for detection overlapping ground truth with IOU = 0.5. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=event%20camera" title="event camera">event camera</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20with%20multimodal%20inputs" title=" object detection with multimodal inputs"> object detection with multimodal inputs</a>, <a href="https://publications.waset.org/abstracts/search?q=multichannel%20fusion" title=" multichannel fusion"> multichannel fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/190247/multichannel-object-detection-with-event-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190247.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">27</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3524</span> Securing Web Servers by the Intrusion Detection System (IDS)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousef%20Farhaoui">Yousef Farhaoui </a> </p> <p class="card-text"><strong>Abstract:</strong></p> An IDS is a tool which is used to improve the level of security. 
In this paper, we present different IDS architectures. We also discuss the measures that define the effectiveness of an IDS and recent work on the standardization and homogenization of IDS. Finally, we propose a new IDS model called BiIDS (an IDS based on the two principles of detection) for securing web servers and applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intrusion%20detection" title="intrusion detection">intrusion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=architectures" title=" architectures"> architectures</a>, <a href="https://publications.waset.org/abstracts/search?q=characteristic" title=" characteristic"> characteristic</a>, <a href="https://publications.waset.org/abstracts/search?q=tools" title=" tools"> tools</a>, <a href="https://publications.waset.org/abstracts/search?q=security" title=" security"> security</a>, <a href="https://publications.waset.org/abstracts/search?q=web%20server" title=" web server"> web server</a> </p> <a href="https://publications.waset.org/abstracts/13346/securing-web-servers-by-the-intrusion-detection-system-ids" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13346.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">418</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3523</span> Suggestion for Malware Detection Agent Considering Network Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ji-Hoon%20Hong">Ji-Hoon Hong</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong-Hee%20Kim"> Dong-Hee Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nam-Uk%20Kim"> Nam-Uk Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Tai-Myoung%20Chung"> Tai-Myoung Chung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The number of smartphone users is increasing rapidly. Accordingly, many companies are running BYOD (Bring Your Own Device: policies that allow private smartphones into the company) programs to increase work efficiency. However, smartphones are always under the threat of malware, so a company network to which smartphones are connected is exposed to serious risks. Most smartphone malware detection techniques perform independent detection (detection of a single target application). In this paper, we analyze a variety of intrusion detection techniques and, based on the results of the analysis, propose an agent that uses a network IDS. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=android%20malware%20detection" title="android malware detection">android malware detection</a>, <a href="https://publications.waset.org/abstracts/search?q=software-defined%20network" title=" software-defined network"> software-defined network</a>, <a href="https://publications.waset.org/abstracts/search?q=interaction%20environment" title=" interaction environment"> interaction environment</a>, <a href="https://publications.waset.org/abstracts/search?q=android%20malware%20detection" title=" android malware detection"> android malware detection</a>, <a href="https://publications.waset.org/abstracts/search?q=software-defined%20network" title=" software-defined network"> software-defined network</a>, <a href="https://publications.waset.org/abstracts/search?q=interaction%20environment" title=" interaction environment"> interaction environment</a> </p> <a href="https://publications.waset.org/abstracts/39330/suggestion-for-malware-detection-agent-considering-network-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39330.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3522</span> Improved Skin Detection Using Colour Space and Texture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Medjram%20Sofiane">Medjram Sofiane</a>, <a href="https://publications.waset.org/abstracts/search?q=Babahenini%20Mohamed%20Chaouki"> Babahenini Mohamed Chaouki</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Benali%20Yamina"> Mohamed Benali Yamina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Skin detection is an important task for computer vision systems. A good method for skin detection means a good and successful result of the system. The colour is a good descriptor that allows us to detect skin colour in the images, but because of lightings effects and objects that have a similar colour skin, skin detection becomes difficult. In this paper, we proposed a method using the YCbCr colour space for skin detection and lighting effects elimination, then we use the information of texture to eliminate the false regions detected by the YCbCr colour skin model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title="skin detection">skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YCbCr" title=" YCbCr"> YCbCr</a>, <a href="https://publications.waset.org/abstracts/search?q=GLCM" title=" GLCM"> GLCM</a>, <a href="https://publications.waset.org/abstracts/search?q=texture" title=" texture"> texture</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20skin" title=" human skin"> human skin</a> </p> <a href="https://publications.waset.org/abstracts/19039/improved-skin-detection-using-colour-space-and-texture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19039.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">459</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3521</span> Real-Time Detection of Space Manipulator Self-Collision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhang%20Xiaodong">Zhang Xiaodong</a>, <a href="https://publications.waset.org/abstracts/search?q=Tang%20Zixin"> Tang Zixin</a>, <a href="https://publications.waset.org/abstracts/search?q=Liu%20Xin"> Liu Xin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to avoid self-collision of space manipulators during operation process, a real-time detection method is proposed in this paper. The manipulator is fitted into a cylinder enveloping surface, and then the detection algorithm of collision between cylinders is analyzed. The collision model of space manipulator self-links can be detected by using this algorithm in real-time detection during the operation process. To ensure security of the operation, a safety threshold is designed. The simulation and experiment results verify the effectiveness of the proposed algorithm for a 7-DOF space manipulator. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=space%20manipulator" title="space manipulator">space manipulator</a>, <a href="https://publications.waset.org/abstracts/search?q=collision%20detection" title=" collision detection"> collision detection</a>, <a href="https://publications.waset.org/abstracts/search?q=self-collision" title=" self-collision"> self-collision</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20real-time%20collision%20detection" title=" the real-time collision detection"> the real-time collision detection</a> </p> <a href="https://publications.waset.org/abstracts/23258/real-time-detection-of-space-manipulator-self-collision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23258.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">469</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3520</span> Iris Detection on RGB Image for Controlling Side Mirror</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Norzalina%20Othman">Norzalina Othman</a>, <a href="https://publications.waset.org/abstracts/search?q=Nurul%20Na%E2%80%99imy%20Wan"> Nurul Na’imy Wan</a>, <a href="https://publications.waset.org/abstracts/search?q=Azliza%20Mohd%20Rusli"> Azliza Mohd Rusli</a>, <a href="https://publications.waset.org/abstracts/search?q=Wan%20Noor%20Syahirah%20Meor%20Idris"> Wan Noor Syahirah Meor Idris</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Iris detection is a process where the position of the eyes is extracted from the face images. It is a current method used for many applications such as for security purpose and drowsiness detection. This paper proposes the use of eyes detection in controlling side mirror of motor vehicles. The eyes detection method aims to make driver easy to adjust the side mirrors automatically. The system will determine the midpoint coordinate of eyes detection on RGB (color) image and the input signal from y-coordinate will send it to controller in order to rotate the angle of side mirror on vehicle. The eye position was cropped and the coordinate of midpoint was successfully detected from the circle of iris detection using Viola Jones detection and circular Hough transform methods on RGB image. The coordinate of midpoint from the experiment are tested using controller to determine the angle of rotation on the side mirrors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris%20detection" title="iris detection">iris detection</a>, <a href="https://publications.waset.org/abstracts/search?q=midpoint%20coordinates" title=" midpoint coordinates"> midpoint coordinates</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20images" title=" RGB images"> RGB images</a>, <a href="https://publications.waset.org/abstracts/search?q=side%20mirror" title=" side mirror"> side mirror</a> </p> <a href="https://publications.waset.org/abstracts/8133/iris-detection-on-rgb-image-for-controlling-side-mirror" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8133.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">423</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3519</span> Automatic Vehicle Detection Using Circular Synthetic Aperture Radar Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Leping%20Chen">Leping Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Daoxiang%20An"> Daoxiang An</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaotao%20Huang"> Xiaotao Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic vehicle detection using synthetic aperture radar (SAR) image has been widely researched, as well as using optical remote sensing images. However, most researches treat the detection as an independent problem, failing to make full use of SAR data information. In circular SAR (CSAR), the two long borders of vehicle will shrink if the imaging surface is set higher than the reference one. Based on above variance, an automatic vehicle detection using CSAR image is proposed to enhance detection ability under complex environment, such as vehicles’ closely packing, which confuses the detector. The detection method uses the multiple images generated by different height plane to obtain an energy-concentrated image for detecting and then uses the maximally stable extremal regions method (MSER) to detect vehicles. A result of vehicles’ detection is given to verify the effectiveness and correctness of proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=circular%20SAR" title="circular SAR">circular SAR</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20detection" title=" vehicle detection"> vehicle detection</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic" title=" automatic"> automatic</a>, <a href="https://publications.waset.org/abstracts/search?q=imaging" title=" imaging"> imaging</a> </p> <a href="https://publications.waset.org/abstracts/84548/automatic-vehicle-detection-using-circular-synthetic-aperture-radar-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84548.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">367</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=118">118</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=119">119</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=shadow%20detection&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My 
Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
