Search results for: vision based
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="vision based"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 28806</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: vision based</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28806</span> A Systematic Categorization of Arguments against the Vision Zero Goal: A Literature Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Henok%20Girma%20Abebe">Henok Girma Abebe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Vision Zero is a long-term goal of preventing all road traffic fatalities and serious injuries which was first adopted in Sweden in 1997. It is based on the assumption that death and serious injury in the road system is morally unacceptable. In order to approach this end, vision zero has put in place strategies that are radically different from the traditional safety work. The vision zero, for instance, promoted the adoption of the best available technology to promote safety, and placed the ultimate responsibility for traffic safety on system designers. Despite Vision Zero’s moral appeal and its expansion to different safety areas and also parts of the world, important philosophical concerns related to the adoption and implementation of the vision zero remain to be addressed. Moreover, the vision zero goal has been criticized on different grounds. The aim of this paper is to identify and systematically categorize criticisms that have been put forward against vision zero. The findings of the paper are solely based on a critical analysis of secondary sources and snowball method is employed to identify the relevant philosophical and empirical literatures. Two general categories of criticisms on the vision zero goal are identified. The first category consists of criticisms that target the setting of vision zero as a ‘goal’ and some of the basic assumptions upon which the goal is based. Among others, the goal of achieving zero fatalities and serious injuries, together with vision zero’s lexicographical prioritization of safety has been criticized as unrealistic. 
[28805] Visual Improvement with Low Vision Aids in Children with Stargardt’s Disease
Authors: Anum Akhter, Sumaira Altaf
Abstract: Purpose: To study the effect of low vision devices, i.e. telescopes and magnifying glasses, on the distance and near visual acuity of children with Stargardt's disease. Setting: Low vision department, Alshifa Trust Eye Hospital, Rawalpindi, Pakistan. Methods: 52 children with Stargardt's disease were included in the study. All children were diagnosed by pediatric ophthalmologists, and a comprehensive low vision assessment was carried out in the low vision clinic. Visual acuity was measured using the ETDRS chart, and refraction and other supplementary tests were performed. The children were provided with different telescopes and magnifying glasses to improve far and near vision. Results: Of the 52 children, 17 were male and 35 were female. Distance and near visual acuity improved significantly with the low vision aid trial: all children achieved visual acuity better than 6/19 with a telescope of higher magnification, and the improvement in near visual acuity with the magnifying glasses trial was also significant. Conclusions: Low vision aids are useful for improving visual acuity in children. Children with Stargardt's disease who have difficulties in education and daily life activities can benefit from low vision aids.
Keywords: Stargardt's disease, low vision aids, telescope, magnifiers
Procedia: https://publications.waset.org/abstracts/24382/visual-improvement-with-low-vision-aids-in-children-with-stargardts-disease | PDF: https://publications.waset.org/abstracts/24382.pdf | Downloads: 539
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stargardt" title="Stargardt">Stargardt</a>, <a href="https://publications.waset.org/abstracts/search?q=s%20disease" title="s disease">s disease</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20vision%20aids" title=" low vision aids"> low vision aids</a>, <a href="https://publications.waset.org/abstracts/search?q=telescope" title=" telescope"> telescope</a>, <a href="https://publications.waset.org/abstracts/search?q=magnifiers" title=" magnifiers"> magnifiers</a> </p> <a href="https://publications.waset.org/abstracts/24382/visual-improvement-with-low-vision-aids-in-children-with-stargardts-disease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">539</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28804</span> Human Motion Capture: New Innovations in the Field of Computer Vision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Najm%20Alotaibi">Najm Alotaibi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human motion capture has become one of the major area of interest in the field of computer vision. Some of the major application areas that have been rapidly evolving include the advanced human interfaces, virtual reality and security/surveillance systems. This study provides a brief overview of the techniques and applications used for the markerless human motion capture, which deals with analyzing the human motion in the form of mathematical formulations. The major contribution of this research is that it classifies the computer vision based techniques of human motion capture based on the taxonomy, and then breaks its down into four systematically different categories of tracking, initialization, pose estimation and recognition. The detailed descriptions and the relationships descriptions are given for the techniques of tracking and pose estimation. The subcategories of each process are further described. Various hypotheses have been used by the researchers in this domain are surveyed and the evolution of these techniques have been explained. It has been concluded in the survey that most researchers have focused on using the mathematical body models for the markerless motion capture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20motion%20capture" title="human motion capture">human motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=vision-based" title=" vision-based"> vision-based</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a> </p> <a href="https://publications.waset.org/abstracts/22770/human-motion-capture-new-innovations-in-the-field-of-computer-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22770.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28803</span> Performance Analysis of Vision-Based Transparent Obstacle Avoidance for Construction Robots</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siwei%20Chang">Siwei Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Heng%20Li"> Heng Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Haitao%20Wu"> Haitao Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xin%20Fang"> Xin Fang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Construction robots are receiving more and more attention as a promising solution to the manpower shortage issue in the construction industry. The development of intelligent control techniques that assist in controlling the robots to avoid transparency and reflected building obstacles is crucial for guaranteeing the adaptability and flexibility of mobile construction robots in complex construction environments. With the boom of computer vision techniques, a number of studies have proposed vision-based methods for transparent obstacle avoidance to improve operation accuracy. However, vision-based methods are also associated with disadvantages such as high computational costs. To provide better perception and value evaluation, this study aims to analyze the performance of vision-based techniques for avoiding transparent building obstacles. To achieve this, commonly used sensors, including a lidar, an ultrasonic sensor, and a USB camera, are equipped on the robotic platform to detect obstacles. A Raspberry Pi 3 computer board is employed to compute data collecting and control algorithms. The turtlebot3 burger is employed to test the programs. On-site experiments are carried out to observe the performance in terms of success rate and detection distance. Control variables include obstacle shapes and environmental conditions. The findings contribute to demonstrating how effectively vision-based obstacle avoidance strategies for transparent building obstacle avoidance and provide insights and informed knowledge when introducing computer vision techniques in the aforementioned domain. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=construction%20robot" title="construction robot">construction robot</a>, <a href="https://publications.waset.org/abstracts/search?q=obstacle%20avoidance" title=" obstacle avoidance"> obstacle avoidance</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=transparent%20obstacle" title=" transparent obstacle"> transparent obstacle</a> </p> <a href="https://publications.waset.org/abstracts/165433/performance-analysis-of-vision-based-transparent-obstacle-avoidance-for-construction-robots" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165433.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">80</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28802</span> Examining the Significance of Service Learning in Driving the Purpose of a Rural-Based University in South Africa</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20Maphosa">C. Maphosa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ndileleni%20Mudzielwana"> Ndileleni Mudzielwana</a>, <a href="https://publications.waset.org/abstracts/search?q=Lufuno%20Phillip%20Netshifhefhe"> Lufuno Phillip Netshifhefhe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In line with established mission and vision, a university articulates its focus and purpose of existence. The conduct of business in a university should be for the furtherance of the mission and vision. Teaching and learning should play a pivotal role in driving the purpose of a university. In this paper, the researchers examine how service learning could be significant in driving the purpose of a rural-based university whose focus is to promote rural development. The importance of institutions’ vision and mission statement is explored and the vision and mission of the said university examined closely. The concept rural development and the contribution of a university in its promotion is discussed. Service learning as a teaching and learning approach is examined and its significance in driving the purpose of a rural-based university explained. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=relevance" title="relevance">relevance</a>, <a href="https://publications.waset.org/abstracts/search?q=differentiation" title=" differentiation"> differentiation</a>, <a href="https://publications.waset.org/abstracts/search?q=purpose" title=" purpose"> purpose</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching" title=" teaching"> teaching</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a> </p> <a href="https://publications.waset.org/abstracts/52164/examining-the-significance-of-service-learning-in-driving-the-purpose-of-a-rural-based-university-in-south-africa" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52164.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28801</span> Optimizing Machine Vision System Setup Accuracy by Six-Sigma DMAIC Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joseph%20C.%20Chen">Joseph C. Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machine vision system provides automatic inspection to reduce manufacturing costs considerably. However, only a few principles have been found to optimize machine vision system and help it function more accurately in industrial practice. Mostly, there were complicated and impractical design techniques to improve the accuracy of machine vision system. This paper discusses implementing the Six Sigma Define, Measure, Analyze, Improve, and Control (DMAIC) approach to optimize the setup parameters of machine vision system when it is used as a direct measurement technique. This research follows a case study showing how Six Sigma DMAIC methodology has been put into use. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DMAIC" title="DMAIC">DMAIC</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision%20system" title=" machine vision system"> machine vision system</a>, <a href="https://publications.waset.org/abstracts/search?q=process%20capability" title=" process capability"> process capability</a>, <a href="https://publications.waset.org/abstracts/search?q=Taguchi%20Parameter%20Design" title=" Taguchi Parameter Design"> Taguchi Parameter Design</a> </p> <a href="https://publications.waset.org/abstracts/68243/optimizing-machine-vision-system-setup-accuracy-by-six-sigma-dmaic-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68243.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">437</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28800</span> Analysis of Public Space Usage Characteristics Based on Computer Vision Technology - Taking Shaping Park as an Example</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guantao%20Bai">Guantao Bai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Public space is an indispensable and important component of the urban built environment. How to more accurately evaluate the usage characteristics of public space can help improve its spatial quality. Compared to traditional survey methods, computer vision technology based on deep learning has advantages such as dynamic observation and low cost. This study takes the public space of Shaping Park as an example and, based on deep learning computer vision technology, processes and analyzes the image data of the public space to obtain the spatial usage characteristics and spatiotemporal characteristics of the public space. Research has found that the spontaneous activity time in public spaces is relatively random with a relatively short average activity time, while social activities have a relatively stable activity time with a longer average activity time. Computer vision technology based on deep learning can effectively describe the spatial usage characteristics of the research area, making up for the shortcomings of traditional research methods and providing relevant support for creating a good public space. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20spaces" title=" public spaces"> public spaces</a>, <a href="https://publications.waset.org/abstracts/search?q=using%20features" title=" using features"> using features</a> </p> <a href="https://publications.waset.org/abstracts/173323/analysis-of-public-space-usage-characteristics-based-on-computer-vision-technology-taking-shaping-park-as-an-example" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173323.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28799</span> Inspection of Railway Track Fastening Elements Using Artificial Vision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdelkrim%20Belhaoua">Abdelkrim Belhaoua</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean-Pierre%20Radoux"> Jean-Pierre Radoux</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In France, the railway network is one of the main transport infrastructures and is the second largest European network. Therefore, railway inspection is an important task in railway maintenance to ensure safety for passengers using significant means in personal and technical facilities. Artificial vision has recently been applied to several railway applications due to its potential to improve the efficiency and accuracy when analyzing large databases of acquired images. In this paper, we present a vision system able to detect fastening elements based on artificial vision approach. This system acquires railway images using a CCD camera installed under a control carriage. These images are stitched together before having processed. Experimental results are presented to show that the proposed method is robust for detection fasteners in a complex environment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=railway%20inspection" title=" railway inspection"> railway inspection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20stitching" title=" image stitching"> image stitching</a>, <a href="https://publications.waset.org/abstracts/search?q=fastener%20recognition" title=" fastener recognition"> fastener recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a> </p> <a href="https://publications.waset.org/abstracts/38749/inspection-of-railway-track-fastening-elements-using-artificial-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38749.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">455</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28798</span> Video Based Ambient Smoke Detection By Detecting Directional Contrast Decrease</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omair%20Ghori">Omair Ghori</a>, <a href="https://publications.waset.org/abstracts/search?q=Anton%20Stadler"> Anton Stadler</a>, <a href="https://publications.waset.org/abstracts/search?q=Stefan%20Wilk"> Stefan Wilk</a>, <a href="https://publications.waset.org/abstracts/search?q=Wolfgang%20Effelsberg"> Wolfgang Effelsberg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fire-related incidents account for extensive loss of life and material damage. Quick and reliable detection of occurring fires has high real world implications. Whereas a major research focus lies on the detection of outdoor fires, indoor camera-based fire detection is still an open issue. Cameras in combination with computer vision helps to detect flames and smoke more quickly than conventional fire detectors. In this work, we present a computer vision-based smoke detection algorithm based on contrast changes and a multi-step classification. This work accelerates computer vision-based fire detection considerably in comparison with classical indoor-fire detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contrast%20analysis" title="contrast analysis">contrast analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=early%20fire%20detection" title=" early fire detection"> early fire detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20smoke%20detection" title=" video smoke detection"> video smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/52006/video-based-ambient-smoke-detection-by-detecting-directional-contrast-decrease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52006.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28797</span> Cone Contrast Sensitivity of Normal Trichromats and Those with Red-Green Dichromats</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tatsuya%20Iizuka">Tatsuya Iizuka</a>, <a href="https://publications.waset.org/abstracts/search?q=Takushi%20Kawamorita"> Takushi Kawamorita</a>, <a href="https://publications.waset.org/abstracts/search?q=Tomoya%20Handa"> Tomoya Handa</a>, <a href="https://publications.waset.org/abstracts/search?q=Hitoshi%20Ishikawa"> Hitoshi Ishikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We report normative cone contrast sensitivity values and sensitivity and specificity values for a computer-based color vision test, the cone contrast test-HD (CCT-HD). The participants included 50 phakic eyes with normal color vision (NCV) and 20 dichromatic eyes (ten with protanopia and ten with deuteranopia). The CCT-HD was used to measure L, M, and S-CCT-HD scores (color vision deficiency, L-, M-cone logCS≦1.65, S-cone logCS≦0.425) to investigate the sensitivity and specificity of CCT-HD based on anomalous-type diagnosis with animalscope. The mean ± standard error L-, M-, S-cone logCS for protanopia were 0.90±0.04, 1.65±0.03, and 0.63±0.02, respectively; for deuteranopia 1.74±0.03, 1.31±0.03, and 0.61±0.06, respectively; and for age-matched NCV were 1.89±0.04, 1.84±0.04, and 0.60±0.03, respectively, with significant differences for each group except for S-CCT-HD (Bonferroni corrected α = 0.0167, p < 0.0167). The sensitivity and specificity of CCT-HD were 100% for protan and deutan in diagnosing abnormal types from 20 to 64 years of age, but the specificity decreased to 65% for protan and 55% for deutan in older persons > 65. CCT-HD is comparable to the diagnostic performance of the anomalous type in the anomaloscope for the 20-64-year-old age group. However, the results should be interpreted cautiously in those ≥ 65 years. They are more susceptible to acquired color vision deficiencies due to the yellowing of the crystalline lens and other factors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cone%20contrast%20test%20HD" title="cone contrast test HD">cone contrast test HD</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20vision%20test" title=" color vision test"> color vision test</a>, <a href="https://publications.waset.org/abstracts/search?q=congenital%20color%20vision%20deficiency" title=" congenital color vision deficiency"> congenital color vision deficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=red-green%20dichromacy" title=" red-green dichromacy"> red-green dichromacy</a>, <a href="https://publications.waset.org/abstracts/search?q=cone%20contrast%20sensitivity" title=" cone contrast sensitivity"> cone contrast sensitivity</a> </p> <a href="https://publications.waset.org/abstracts/159154/cone-contrast-sensitivity-of-normal-trichromats-and-those-with-red-green-dichromats" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159154.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28796</span> Development of Agricultural Robotic Platform for Inter-Row Plant: An Autonomous Navigation Based on Machine Vision </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alaa%20El-Din%20Rezk">Alaa El-Din Rezk </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In Egypt, management of crops still away from what is being used today by utilizing the advances of mechanical design capabilities, sensing and electronics technology. These technologies have been introduced in many places and recorm, for Straight Path, Curved Path, Sine Wave ded high accuracy in different field operations. So, an autonomous robotic platform based on machine vision has been developed and constructed to be implemented in Egyptian conditions as self-propelled mobile vehicle for carrying tools for inter/intra-row crop management based on different control modules. The experiments were carried out at plant protection research institute (PPRI) during 2014-2015 to optimize the accuracy of agricultural robotic platform control using machine vision in term of the autonomous navigation and performance of the robot’s guidance system. Results showed that the robotic platform' guidance system with machine vision was able to adequately distinguish the path and resisted image noise and did better than human operators for getting less lateral offset error. The average error of autonomous was 2.75, 19.33, 21.22, 34.18, and 16.69 mm. while the human operator was 32.70, 4.85, 7.85, 38.35 and 14.75 mm Path, Offset Discontinuity and Angle Discontinuity respectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomous%20robotic" title="autonomous robotic">autonomous robotic</a>, <a href="https://publications.waset.org/abstracts/search?q=Hough%20transform" title=" Hough transform"> Hough transform</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title=" machine vision "> machine vision </a> </p> <a href="https://publications.waset.org/abstracts/43565/development-of-agricultural-robotic-platform-for-inter-row-plant-an-autonomous-navigation-based-on-machine-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43565.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">315</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28795</span> Powerful Laser Diode Matrixes for Active Vision Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dzmitry%20M.%20Kabanau">Dzmitry M. Kabanau</a>, <a href="https://publications.waset.org/abstracts/search?q=Vladimir%20V.%20Kabanov"> Vladimir V. Kabanov</a>, <a href="https://publications.waset.org/abstracts/search?q=Yahor%20V.%20Lebiadok"> Yahor V. Lebiadok</a>, <a href="https://publications.waset.org/abstracts/search?q=Denis%20V.%20Shabrov"> Denis V. Shabrov</a>, <a href="https://publications.waset.org/abstracts/search?q=Pavel%20V.%20Shpak"> Pavel V. Shpak</a>, <a href="https://publications.waset.org/abstracts/search?q=Gevork%20T.%20Mikaelyan"> Gevork T. Mikaelyan</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandr%20P.%20Bunichev"> Alexandr P. Bunichev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article is deal with the experimental investigations of the laser diode matrixes (LDM) based on the AlGaAs/GaAs heterostructures (lasing wavelength 790-880 nm) to find optimal LDM parameters for active vision systems. In particular, the dependence of LDM radiation pulse power on the pulse duration and LDA active layer heating as well as the LDM radiation divergence are discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=active%20vision%20systems" title="active vision systems">active vision systems</a>, <a href="https://publications.waset.org/abstracts/search?q=laser%20diode%20matrixes" title=" laser diode matrixes"> laser diode matrixes</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20properties" title=" thermal properties"> thermal properties</a>, <a href="https://publications.waset.org/abstracts/search?q=radiation%20divergence" title=" radiation divergence"> radiation divergence</a> </p> <a href="https://publications.waset.org/abstracts/19451/powerful-laser-diode-matrixes-for-active-vision-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19451.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">612</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28794</span> A Combined Approach Based on Artificial Intelligence and Computer Vision for Qualitative Grading of Rice Grains</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hemad%20Zareiforoush">Hemad Zareiforoush</a>, <a href="https://publications.waset.org/abstracts/search?q=Saeed%20Minaei"> Saeed Minaei</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Banakar"> Ahmad Banakar</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Reza%20Alizadeh"> Mohammad Reza Alizadeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The quality inspection of rice (Oryza sativa L.) during its various processing stages is very important. In this research, an artificial intelligence-based model coupled with computer vision techniques was developed as a decision support system for qualitative grading of rice grains. For conducting the experiments, first, 25 samples of rice grains with different levels of percentage of broken kernels (PBK) and degree of milling (DOM) were prepared and their qualitative grade was assessed by experienced experts. Then, the quality parameters of the same samples examined by experts were determined using a machine vision system. A grading model was developed based on fuzzy logic theory in MATLAB software for making a relationship between the qualitative characteristics of the product and its quality. Totally, 25 rules were used for qualitative grading based on AND operator and Mamdani inference system. The fuzzy inference system was consisted of two input linguistic variables namely, DOM and PBK, which were obtained by the machine vision system, and one output variable (quality of the product). The model output was finally defuzzified using Center of Maximum (COM) method. In order to evaluate the developed model, the output of the fuzzy system was compared with experts’ assessments. It was revealed that the developed model can estimate the qualitative grade of the product with an accuracy of 95.74%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title="machine vision">machine vision</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20logic" title=" fuzzy logic"> fuzzy logic</a>, <a href="https://publications.waset.org/abstracts/search?q=rice" title=" rice"> rice</a>, <a href="https://publications.waset.org/abstracts/search?q=quality" title=" quality"> quality</a> </p> <a href="https://publications.waset.org/abstracts/9943/a-combined-approach-based-on-artificial-intelligence-and-computer-vision-for-qualitative-grading-of-rice-grains" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">419</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28793</span> Vision Based People Tracking System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Boukerch%20Haroun">Boukerch Haroun</a>, <a href="https://publications.waset.org/abstracts/search?q=Luo%20Qing%20Sheng"> Luo Qing Sheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Hua%20Shi"> Li Hua Shi</a>, <a href="https://publications.waset.org/abstracts/search?q=Boukraa%20Sebti"> Boukraa Sebti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present the design and the implementation of a target tracking system where the target is set to be a moving person in a video sequence. The system can be applied easily as a vision system for mobile robot. The system is composed of two major parts the first is the detection of the person in the video frame using the SVM learning machine based on the “HOG” descriptors. The second part is the tracking of a moving person it’s done by using a combination of the Kalman filter and a modified version of the Camshift tracking algorithm by adding the target motion feature to the color feature, the experimental results had shown that the new algorithm had overcame the traditional Camshift algorithm in robustness and in case of occlusion. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camshift%20algorithm" title="camshift algorithm">camshift algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a> </p> <a href="https://publications.waset.org/abstracts/2264/vision-based-people-tracking-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2264.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28792</span> Glaucoma Detection in Retinal Tomography Using the Vision Transformer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sushish%20Baral">Sushish Baral</a>, <a href="https://publications.waset.org/abstracts/search?q=Pratibha%20Joshi"> Pratibha Joshi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yaman%20Maharjan"> Yaman Maharjan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Glaucoma is a chronic eye condition that causes vision loss that is irreversible. Early detection and treatment are critical to prevent vision loss because it can be asymptomatic. For the identification of glaucoma, multiple deep learning algorithms are used. Transformer-based architectures, which use the self-attention mechanism to encode long-range dependencies and acquire extremely expressive representations, have recently become popular. Convolutional architectures, on the other hand, lack knowledge of long-range dependencies in the image due to their intrinsic inductive biases. The aforementioned statements inspire this thesis to look at transformer-based solutions and investigate the viability of adopting transformer-based network designs for glaucoma detection. Using retinal fundus images of the optic nerve head to develop a viable algorithm to assess the severity of glaucoma necessitates a large number of well-curated images. Initially, data is generated by augmenting ocular pictures. After that, the ocular images are pre-processed to make them ready for further processing. The system is trained using pre-processed images, and it classifies the input images as normal or glaucoma based on the features retrieved during training. The Vision Transformer (ViT) architecture is well suited to this situation, as it allows the self-attention mechanism to utilise structural modeling. Extensive experiments are run on the common dataset, and the results are thoroughly validated and visualized. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=glaucoma" title="glaucoma">glaucoma</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformer" title=" vision transformer"> vision transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20architectures" title=" convolutional architectures"> convolutional architectures</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20fundus%20images" title=" retinal fundus images"> retinal fundus images</a>, <a href="https://publications.waset.org/abstracts/search?q=self-attention" title=" self-attention"> self-attention</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/141224/glaucoma-detection-in-retinal-tomography-using-the-vision-transformer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141224.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">191</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28791</span> Texture Identification Using Vision System: A Method to Predict Functionality of a Component</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Varsha%20Singh">Varsha Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Shraddha%20Prajapati"> Shraddha Prajapati</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20B.%20Kiran"> M. B. Kiran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Texture identification is useful in predicting the functionality of a component. Many of the existing texture identification methods are of contact in nature, which limits its measuring speed. These contact measurement techniques use a diamond stylus and the diamond stylus being sharp going to damage the surface under inspection and hence these techniques can be used in statistical sampling. Though these contact methods are very accurate, they do not give complete information for full characterization of surface. In this context, the presented method assumes special significance. The method uses a relatively low cost vision system for image acquisition. Software is developed based on wavelet transform, for analyzing texture images. Specimens are made using different manufacturing process (shaping, grinding, milling etc.) During experimentation, the specimens are illuminated using proper lighting and texture images a capture using CCD camera connected to the vision system. The software installed in the vision system processes these images and subsequently identify the texture of manufacturing processes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diamond%20stylus" title="diamond stylus">diamond stylus</a>, <a href="https://publications.waset.org/abstracts/search?q=manufacturing%20process" title=" manufacturing process"> manufacturing process</a>, <a href="https://publications.waset.org/abstracts/search?q=texture%20identification" title=" texture identification"> texture identification</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20system" title=" vision system"> vision system</a> </p> <a href="https://publications.waset.org/abstracts/61722/texture-identification-using-vision-system-a-method-to-predict-functionality-of-a-component" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61722.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">289</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28790</span> How Envisioning Process Is Constructed: An Exploratory Research Comparing Three International Public Televisions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alexandre%20Bedard">Alexandre Bedard</a>, <a href="https://publications.waset.org/abstracts/search?q=Johane%20Brunet"> Johane Brunet</a>, <a href="https://publications.waset.org/abstracts/search?q=Wendellyn%20Reid"> Wendellyn Reid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Public Television is constantly trying to maintain and develop its audience. And to achieve those goals, it needs a strong and clear vision. Vision or envision is a multidimensional process; it is simultaneously a conduit that orients and fixes the future, an idea that comes before the strategy and a mean by which action is accomplished, from a business perspective. Also, vision is often studied from a prescriptive and instrumental manner. Based on our understanding of the literature, we were able to explain how envisioning, as a process, is a creative one; it takes place in the mind and uses wisdom and intelligence through a process of evaluation, analysis and creation. Through an aggregation of the literature, we build a model of the envisioning process, based on past experiences, perceptions and knowledge and influenced by the context, being the individual, the organization and the environment. With exploratory research in which vision was deciphered through the discourse, through a qualitative and abductive approach and a grounded theory perspective, we explored three extreme cases, with eighteen interviews with experts, leaders, politicians, actors of the industry, etc. and more than twenty hours of interviews in three different countries. We compared the strategy, the business model, and the political and legal forces. We also looked at the history of each industry from an inertial point of view. Our analysis of the data revealed that a legitimacy effect due to the audience, the innovation and the creativity of the institutions was at the cornerstone of what would influence the envisioning process. 
[28789] RV-YOLOX: Object Detection on Inland Waterways Based on Optimized YOLOX Through Fusion of Vision and 3+1D Millimeter Wave Radar
Authors: Zixian Zhang, Shanliang Yao, Zile Huang, Zhaodong Wu, Xiaohui Zhu, Yong Yue, Jieming Ma
Abstract: Unmanned surface vehicles (USVs) are valuable because they can perform dangerous and time-consuming tasks on the water, and object detection is a significant task in these applications. However, inherent challenges such as the complex distribution of obstacles, reflections from shore structures, and water-surface fog hinder the object detection performance of USVs. To address these problems, this paper presents a fusion method that allows USVs to detect objects effectively in inland surface environments by combining vision sensors with 3+1D millimeter-wave radar. MMW radar is complementary to vision sensors and provides robust environmental information. The radar 3D point cloud is transformed into a 2D radar pseudo-image, unifying the radar and vision information formats by means of a point transformer. We propose a multi-source object detection network (RV-YOLOX) based on radar-vision fusion for inland waterway environments. Performance is evaluated on our self-recorded waterways dataset. Compared with the YOLOX network, our fusion network significantly improves detection accuracy, especially for objects in poor lighting conditions.
Keywords: inland waterways, YOLO, sensor fusion, self-attention
Procedia: https://publications.waset.org/abstracts/164399/rv-yolox-object-detection-on-inland-waterways-based-on-optimized-yolox-through-fusion-of-vision-and-31d-millimeter-wave-radar | PDF: https://publications.waset.org/abstracts/164399.pdf | Downloads: 124
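The radar branch described above rasterizes the 3+1D point cloud into a 2D pseudo-image before fusion. Below is a minimal bird's-eye-view rasterization sketch with assumed grid extent, resolution, and channels; the paper itself uses a point transformer for this step:

```python
# Sketch: rasterize a radar point cloud (x, y, z, doppler) into a 2-channel
# bird's-eye-view pseudo-image. Grid extent/resolution and the channel choice
# are assumptions, not the paper's configuration.
import numpy as np

def radar_pseudo_image(points: np.ndarray, extent_m: float = 50.0, res_m: float = 0.5):
    """points: (N, 4) array of x, y, z, doppler. Returns a (2, H, W) image."""
    size = int(2 * extent_m / res_m)
    img = np.zeros((2, size, size), dtype=np.float32)
    xs, ys, zs, dop = points.T
    cols = ((xs + extent_m) / res_m).astype(int)
    rows = ((ys + extent_m) / res_m).astype(int)
    valid = (cols >= 0) & (cols < size) & (rows >= 0) & (rows < size)
    for r, c, z, d in zip(rows[valid], cols[valid], zs[valid], dop[valid]):
        img[0, r, c] = max(img[0, r, c], z)   # channel 0: max height per cell
        img[1, r, c] += abs(d)                # channel 1: accumulated |doppler|
    return img

# Example with random points; a real system would read a 3+1D radar frame.
pts = np.random.default_rng(1).uniform(-40, 40, size=(500, 4))
print(radar_pseudo_image(pts).shape)  # (2, 200, 200)
```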
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=inland%20waterways" title="inland waterways">inland waterways</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLO" title=" YOLO"> YOLO</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20fusion" title=" sensor fusion"> sensor fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=self-attention" title=" self-attention"> self-attention</a> </p> <a href="https://publications.waset.org/abstracts/164399/rv-yolox-object-detection-on-inland-waterways-based-on-optimized-yolox-through-fusion-of-vision-and-31d-millimeter-wave-radar" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164399.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">124</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28788</span> The Role of Synthetic Data in Aerial Object Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ava%20Dodd">Ava Dodd</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonathan%20Adams"> Jonathan Adams</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this study is to explore the characteristics of developing a machine learning application using synthetic data. The study is structured to develop the application for the purpose of deploying the computer vision model. The findings discuss the realities of attempting to develop a computer vision model for practical purpose, and detail the processes, tools, and techniques that were used to meet accuracy requirements. The research reveals that synthetic data represents another variable that can be adjusted to improve the performance of a computer vision model. Further, a suite of tools and tuning recommendations are provided. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20data" title=" synthetic data"> synthetic data</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv4" title=" YOLOv4"> YOLOv4</a> </p> <a href="https://publications.waset.org/abstracts/139194/the-role-of-synthetic-data-in-aerial-object-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139194.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">225</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28787</span> Research of Control System for Space Intelligent Robot Based on Vision Servo</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Changchun%20Liang">Changchun Liang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaodong%20Zhang"> Xiaodong Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xin%20Liu"> Xin Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Pengfei%20Sun"> Pengfei Sun</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Space intelligent robotic systems are expected to play an increasingly important role in the future. The robotic on-orbital service, whose key is the tracking and capturing technology, becomes research hot in recent years. In this paper, the authors propose a vision servo control system for target capturing. Robotic manipulator will be an intelligent robotic system with large-scale movement, functional agility, and autonomous ability, and it can be operated by astronauts in the space station or be controlled by the ground operator in the remote operation mode. To realize the autonomous movement and capture mission of SRM, a kind of autonomous programming strategy based on multi-camera vision fusion is designed and the selection principle of object visual position and orientation measurement information is defined for the better precision. Distributed control system hierarchy is designed and reliability is considering to guarantee the abilities of control system. At last, a ground experiment system is set up based on the concept of robotic control system. With that, the autonomous target capturing experiments are conducted. The experiment results validate the proposed algorithm, and demonstrates that the control system can fulfill the needs of function, real-time and reliability. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=control%20system" title="control system">control system</a>, <a href="https://publications.waset.org/abstracts/search?q=on-orbital%20service" title=" on-orbital service"> on-orbital service</a>, <a href="https://publications.waset.org/abstracts/search?q=space%20robot" title=" space robot"> space robot</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20servo" title=" vision servo"> vision servo</a> </p> <a href="https://publications.waset.org/abstracts/23241/research-of-control-system-for-space-intelligent-robot-based-on-vision-servo" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23241.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">419</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28786</span> Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Virender%20Singh">Virender Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Mathew%20Rees"> Mathew Rees</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Hampton"> Simon Hampton</a>, <a href="https://publications.waset.org/abstracts/search?q=Sivaram%20Annadurai"> Sivaram Annadurai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolution neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better compared to CNN-based models ResNet50 and ResNet-RS-420 and other state-of-the-art CNN-based models suggested in previous studies on a similar dataset. ViT model achieved top accuracy of 83.3% for classifying plants at the genus level. For classifying plants at the species level, ViT models perform better compared to CNN-based models ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals and the general public alike in identifying plants quicker and with improved accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=plant%20identification" title="plant identification">plant identification</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformer" title=" vision transformer"> vision transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/162359/plant-identification-using-convolution-neural-network-and-vision-transformer-based-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162359.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">104</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28785</span> Functional Vision of Older People in Galician Nursing Homes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20V%C3%A1zquez">C. Vázquez</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20M.%20Gigirey"> L. M. Gigirey</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20P.%20del%20Oro"> C. P. del Oro</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Seoane"> S. Seoane </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Early detection of visual problems plays a key role in the aging process. However, although vision problems are common among older people, the percentage of aging people who perform regular optometric exams is low. In fact, uncorrected refractive errors are one of the main causes of visual impairment in this group of the population. Purpose: To evaluate functional vision of older residents in order to show the urgent need of visual screening programs in Galician nursing homes. Methodology: We examined 364 older adults aged 65 years and over. To measure vision of the daily living, we tested distance and near presenting visual acuity (binocular visual acuity with habitual correction if warn, directional E-Snellen) Presenting near vision was tested at the usual working distance. We defined visual impairment (distance and near) as a presenting visual acuity less than 0.3. Exclusion criteria included immobilized residents unable to reach the USC Dual Sensory Loss Unit for visual screening. Association between categorical variables was performed using chi-square tests. We used Pearson and Spearman correlation tests and the variance analysis to determine differences between groups of interest. Results: 23,1% of participants have visual impairment for distance vision and 16,4% for near vision. The percentage of residents with far and near visual impairment reaches 8,2%. As expected, prevalence of visual impairment increases with age. No differences exist with regard to the level of functional vision between gender. Differences exist between age group respect to distance vision, but not in case of near vision. Conclusion: prevalence of visual impairment is high among the older people tested in this pilot study. 
This implies a high percentage of older people with limitations in their daily life activities. It is necessary to develop an effective vision screening program for early detection of vision problems in Galician nursing homes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=functional%20vision" title="functional vision">functional vision</a>, <a href="https://publications.waset.org/abstracts/search?q=elders" title=" elders"> elders</a>, <a href="https://publications.waset.org/abstracts/search?q=aging" title=" aging"> aging</a>, <a href="https://publications.waset.org/abstracts/search?q=nursing%20homes" title=" nursing homes"> nursing homes</a> </p> <a href="https://publications.waset.org/abstracts/17989/functional-vision-of-older-people-in-galician-nursing-homes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17989.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">408</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28784</span> Objects Tracking in Catadioptric Images Using Spherical Snake</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khald%20Anisse">Khald Anisse</a>, <a href="https://publications.waset.org/abstracts/search?q=Amina%20Radgui"> Amina Radgui</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Rziza"> Mohammed Rziza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tracking objects in video sequences is a very challenging task in many computer vision applications. However, no article treats this topic in catadioptric vision. This paper describes a new approach to omnidirectional image processing based on inverse stereographic projection onto the half-sphere. We used the spherical model proposed by Gayer et al. For object tracking, our work is based on the snake method, with optimization using the greedy algorithm, adapting its different operators. The algorithm respects the deformed geometry of omnidirectional images through a spherical neighborhood, a spherical gradient, and a reformulation of the optimization algorithm on the spherical domain. This tracking method, which we call the "spherical snake", allowed us to follow changes in the shape and size of an object across its displacements in the spherical image.
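<p class="card-text">The approach above lifts image points onto the half-sphere via inverse stereographic projection before applying the snake operators. The sketch below shows the standard inverse stereographic map onto the unit sphere for reference; it is a generic formulation (projection centre at the north pole), not necessarily the exact parameterisation used by the authors.</p> <pre><code>import numpy as np

def inverse_stereographic(u, v):
    """Map image-plane coordinates (u, v) onto the unit sphere with the
    inverse stereographic projection (projection centre at the north pole).
    Returns points (X, Y, Z) satisfying X^2 + Y^2 + Z^2 = 1."""
    r2 = u ** 2 + v ** 2
    X = 2.0 * u / (1.0 + r2)
    Y = 2.0 * v / (1.0 + r2)
    Z = (r2 - 1.0) / (r2 + 1.0)
    return np.stack([X, Y, Z], axis=-1)

# Points inside the unit circle land on the lower half-sphere (negative Z);
# spherical neighborhoods and gradients are then computed on these 3D points.
pts = inverse_stereographic(np.array([0.0, 0.5]), np.array([0.0, 0.5]))
</code></pre>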
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=spherical%20snake" title=" spherical snake"> spherical snake</a>, <a href="https://publications.waset.org/abstracts/search?q=omnidirectional%20image" title=" omnidirectional image"> omnidirectional image</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20stereographic%20projection" title=" inverse stereographic projection"> inverse stereographic projection</a> </p> <a href="https://publications.waset.org/abstracts/2285/objects-tracking-in-catadioptric-images-using-spherical-snake" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2285.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">402</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28783</span> Causes of Blindness and Low Vision among Visually Impaired Population Supported by Welfare Organization in Ardabil Province in Iran</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Maeiyat">Mohammad Maeiyat</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Maeiyat%20Ivatlou"> Ali Maeiyat Ivatlou</a>, <a href="https://publications.waset.org/abstracts/search?q=Rasul%20Fani%20Khiavi"> Rasul Fani Khiavi</a>, <a href="https://publications.waset.org/abstracts/search?q=Abouzar%20Maeiyat%20Ivatlou"> Abouzar Maeiyat Ivatlou</a>, <a href="https://publications.waset.org/abstracts/search?q=Parya%20Maeiyat"> Parya Maeiyat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: Considering the fact that visual impairment is still one of the countries health problem, this study was conducted to determine the causes of blindness and low vision in visually impaired membership of Ardabil Province welfare organization. Methods: The present study which was based on descriptive and national-census, that carried out in visually impaired population supported by welfare organization in all urban and rural areas of Ardabil Province in 2013 and Collection of samples lasted for 7 months. The subjects were inspected by optometrist to determine their visual status (blindness or low vision) and then referred to ophthalmologist in order to discover the main causes of visual impairment based on the international classification of diseases version 10. Statistical analysis of collected data was performed using SPSS software version 18. Results: Overall, 403 subjects with mean age of years participated in this study. 73.2% were blind, 26.8 % were low vision and according gender grouping 60.50 % of them were male, 39.50 % were female that divided into three groups with the age level of lower than 15 (11.2%) 15 to 49 (76.7%), and 50 and higher (12.1%). The age range was 1 to 78 years. 
The causes of blindness and low vision were, in descending order: optic atrophy (18.4%), retinitis pigmentosa (16.8%), corneal diseases (12.4%), chorioretinal diseases (9.4%), cataract (8.9%), glaucoma (8.2%), phthisis bulbi (7.2%), degenerative myopia (6.9%), microphthalmos (4%), amblyopia (3.2%), albinism (2.5%) and nystagmus (2%). Conclusion: In this study, the main causes of visual impairment were optic atrophy and retinitis pigmentosa; thus, specific prevention plans can be effective in reducing the incidence of visual disabilities. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blindness" title="blindness">blindness</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20vision" title=" low vision"> low vision</a>, <a href="https://publications.waset.org/abstracts/search?q=welfare" title=" welfare"> welfare</a>, <a href="https://publications.waset.org/abstracts/search?q=ardabil" title=" ardabil"> ardabil</a> </p> <a href="https://publications.waset.org/abstracts/24741/causes-of-blindness-and-low-vision-among-visually-impaired-population-supported-by-welfare-organization-in-ardabil-province-in-iran" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24741.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">440</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28782</span> An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jie%20Zhao">Jie Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Meng%20Su"> Meng Su</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image recognition, one of the most critical technologies in computer vision, helps machines such as robots understand a scene and, if deployed appropriately, will drive a revolution in remote sensing and industrial automation. With the development of AI technologies, many prevailing and sophisticated neural networks have been developed for image recognition. However, the hardware computer vision platforms that support these networks are just as crucial as the network architectures themselves and deserve equal attention as research subjects, since different platforms determine how well different neural networks perform in recognition tasks. In this paper, three different computer vision platforms – Jetson Nano (with 4 GB), a standalone laptop (with an RTX 3000-series GPU, using CUDA), and Google Colab (web-based, using a GPU) – are explored, and four prominent neural network architectures (AlexNet, VGG (16/19), GoogleNet, and ResNet (18/34/50)) are investigated. Performance is evaluated for each pairing of platform and network in terms of recognition accuracy and time efficiency.
In a case study using public ImageNet datasets, our findings provide a nuanced perspective on optimizing image recognition tasks across Edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=alexNet" title="alexNet">alexNet</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG" title=" VGG"> VGG</a>, <a href="https://publications.waset.org/abstracts/search?q=googleNet" title=" googleNet"> googleNet</a>, <a href="https://publications.waset.org/abstracts/search?q=resNet" title=" resNet"> resNet</a>, <a href="https://publications.waset.org/abstracts/search?q=Jetson%20nano" title=" Jetson nano"> Jetson nano</a>, <a href="https://publications.waset.org/abstracts/search?q=CUDA" title=" CUDA"> CUDA</a>, <a href="https://publications.waset.org/abstracts/search?q=COCO-NET" title=" COCO-NET"> COCO-NET</a>, <a href="https://publications.waset.org/abstracts/search?q=cifar10" title=" cifar10"> cifar10</a>, <a href="https://publications.waset.org/abstracts/search?q=imageNet%20large%20scale%20visual%20recognition%20challenge%20%28ILSVRC%29" title=" imageNet large scale visual recognition challenge (ILSVRC)"> imageNet large scale visual recognition challenge (ILSVRC)</a>, <a href="https://publications.waset.org/abstracts/search?q=google%20colab" title=" google colab"> google colab</a> </p> <a href="https://publications.waset.org/abstracts/176759/an-evaluation-of-neural-network-efficacies-for-image-recognition-on-edge-ai-computer-vision-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176759.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28781</span> Role of Vision Centers in Eliminating Avoidable Blindness Caused Due to Uncorrected Refractive Error in Rural South India</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ranitha%20Guna%20Selvi%20D">Ranitha Guna Selvi D</a>, <a href="https://publications.waset.org/abstracts/search?q=Ramakrishnan%20R"> Ramakrishnan R</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohideen%20Abdul%20Kader"> Mohideen Abdul Kader</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: To study the role of vision centers in managing preventable blindness through refractive error correction in rural South India. Methods: A retrospective analysis of patients attending 15 vision centers in rural South India from January 2021 to December 2021 was performed. Medical records of 1,08,581 patients, both new and review (79,562 newly registered patients and 29,019 review patients), from the 15 vision centers were included for data analysis. All patients registered at the vision centers underwent a basic eye examination, including visual acuity, IOP measurement, slit-lamp examination, retinoscopy, and fundus examination. Results: A total of 1,08,581 patients were included in the study; 79,562 were newly registered patients at the vision centers and 29,019 were review patients.
Among them, 52,201 (48.1%) were male and 56,308 (51.9%) were female. The mean age of all examined patients was 41.03 ± 20.9 years (standard deviation), ranging from 1 to 113 years. Presenting mean visual acuity was 0.31 ± 0.5 in the right eye and 0.31 ± 0.4 in the left eye. Of the 1,08,581 patients, 22,770 had uncorrected refractive error in the right eye and 22,721 in the left eye. Glasses were prescribed to 17,178 (15.8%) patients. 8,109 (7.5%) patients were referred to the base hospital for specialty clinic expert opinion or for cataract surgery. Conclusion: A vision center utilizing teleconsultation as a comprehensive eye screening unit is a very effective tool in reducing avoidable visual impairment caused by uncorrected refractive error. The vision centre model is believed to be efficient as it facilitates early detection and management of uncorrected refractive errors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=refractive%20error" title="refractive error">refractive error</a>, <a href="https://publications.waset.org/abstracts/search?q=uncorrected%20refractive%20error" title=" uncorrected refractive error"> uncorrected refractive error</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20center" title=" vision center"> vision center</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20technician" title=" vision technician"> vision technician</a>, <a href="https://publications.waset.org/abstracts/search?q=teleconsultation" title=" teleconsultation"> teleconsultation</a> </p> <a href="https://publications.waset.org/abstracts/146361/role-of-vision-centers-in-eliminating-avoidable-blindness-caused-due-to-uncorrected-refractive-error-in-rural-south-india" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146361.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28780</span> Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20S.%20Nikitenko">M. S. Nikitenko</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20A.%20Kizilov"> S. A. Kizilov</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Y.%20Khudonogov"> D. Y. Khudonogov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capability of an apron feeder delivering coal from a lining return port to a conveyor when mining high coal with release to a conveyor, and prototyping an autonomous vehicle obstacle detection system. Primary verification of a method for calculating bulk material volume using three-dimensional modeling, and validation in laboratory conditions with calculation of relative errors, were carried out.
A method for calculating the capability of an apron feeder based on a machine vision system, together with a simplified technique for three-dimensional modelling of the examined measuring area with machine vision, was offered. The proposed method allows measuring the volume of rock mass moved by an apron feeder using machine vision. This approach solves the issue of controlling the volume of coal produced by a feeder while working off high coal by longwall complexes with release to a conveyor, with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions such as addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for calculating feeder capability. A feature of the obstacle detection task is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and autonomous vehicle model control based on obstacle detection machine vision systems. A sample fragment of obstacle detection at the moment of laser grid distortion is demonstrated. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title="machine vision">machine vision</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision%20operating%20system" title=" machine vision operating system"> machine vision operating system</a>, <a href="https://publications.waset.org/abstracts/search?q=light%20markers" title=" light markers"> light markers</a>, <a href="https://publications.waset.org/abstracts/search?q=measuring%20capability" title=" measuring capability"> measuring capability</a>, <a href="https://publications.waset.org/abstracts/search?q=obstacle%20detection%20system" title=" obstacle detection system"> obstacle detection system</a>, <a href="https://publications.waset.org/abstracts/search?q=autonomous%20transport" title=" autonomous transport"> autonomous transport</a> </p> <a href="https://publications.waset.org/abstracts/159442/alternative-approach-to-the-machine-vision-system-operating-for-solving-industrial-control-issue" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159442.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">114</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28779</span> Vision-Based Hand Segmentation Techniques for Human-Computer Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Jebali">M. Jebali</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Jemni"> M. Jemni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work is part of a vision-based hand gesture recognition system for a natural human-computer interface. Hand tracking and segmentation are the primary steps for any hand gesture recognition system.
The aim of this paper is to develop a robust and efficient hand segmentation algorithm that serves as input to a larger system intended to bring HCI performance closer to human-human interaction, by modeling an intelligent sign language recognition system based on prediction in the context of a dialogue between the system (avatar) and the interlocutor. For hand segmentation, an occlusion-handling approach is proposed that yields superior results in detecting the hand in an image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HCI" title="HCI">HCI</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20segmentation" title=" hand segmentation"> hand segmentation</a> </p> <a href="https://publications.waset.org/abstracts/26490/vision-based-hand-segmentation-techniques-for-human-computer-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26490.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28778</span> An Investigation on Smartphone-Based Machine Vision System for Inspection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=They%20Shao%20Peng">They Shao Peng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A machine vision system for inspection is an automated technology normally utilized to analyze items on the production line for quality control purposes; it is also known as an automated visual inspection (AVI) system. By applying automated visual inspection, the existence of items, defects, contaminants, flaws, and other irregularities in manufactured products can be detected quickly and accurately. However, AVI systems are still inflexible and expensive because they are built for a specific task and consume a lot of set-up time and space. With the rapid development of mobile devices, smartphones can serve as an alternative device for the vision system and address the existing problems of AVI. Since smartphone-based AVI is still at a nascent stage, this motivated the present investigation. This study aims to provide a low-cost AVI system with high efficiency and flexibility. In this project, two object detection models, You Only Look Once (YOLO) and Single Shot MultiBox Detector (SSD), are trained, evaluated, and integrated with smartphone and webcam devices. The performance of the smartphone-based AVI is compared with that of the webcam-based AVI in terms of precision and inference time. Additionally, a mobile application is developed that allows users to perform real-time object detection and object detection from image storage.
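<p class="card-text">The study above compares smartphone- and webcam-based pipelines by precision and inference time. As a rough illustration of the timing side of such a comparison (not the author's mobile pipeline), the sketch below measures the mean per-frame inference time of a pretrained SSD-style detector from torchvision; the frame sizes and run counts are arbitrary assumptions.</p> <pre><code>import time
import torch
from torchvision.models import detection

def mean_inference_time(model, frames, warmup=3, runs=20):
    """Average per-frame inference time (seconds) of a detector.
    `frames` is a list of float tensors in [0, 1] with shape (3, H, W)."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):                 # discard warm-up runs
            model(frames[:1])
        start = time.perf_counter()
        for _ in range(runs):
            model(frames)
        return (time.perf_counter() - start) / (runs * len(frames))

weights = detection.SSDLite320_MobileNet_V3_Large_Weights.DEFAULT
ssd = detection.ssdlite320_mobilenet_v3_large(weights=weights)
dummy_frames = [torch.rand(3, 320, 320) for _ in range(4)]   # stand-in camera frames
print(f"mean inference time: {mean_inference_time(ssd, dummy_frames):.4f} s/frame")
</code></pre>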
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automated%20visual%20inspection" title="automated visual inspection">automated visual inspection</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title=" machine vision"> machine vision</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20application" title=" mobile application"> mobile application</a> </p> <a href="https://publications.waset.org/abstracts/151908/an-investigation-on-smartphone-based-machine-vision-system-for-inspection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">124</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28777</span> Deep Learning based Image Classifiers for Detection of CSSVD in Cacao Plants</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Atuhurra%20Jesse">Atuhurra Jesse</a>, <a href="https://publications.waset.org/abstracts/search?q=N%27guessan%20Yves-Roland%20Douha"> N'guessan Yves-Roland Douha</a>, <a href="https://publications.waset.org/abstracts/search?q=Pabitra%20Lenka"> Pabitra Lenka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The detection of diseases within plants has attracted a lot of attention from computer vision enthusiasts. Despite the progress made to detect diseases in many plants, there remains a research gap to train image classifiers to detect the cacao swollen shoot virus disease or CSSVD for short, pertinent to cacao plants. This gap has mainly been due to the unavailability of high quality labeled training data. Moreover, institutions have been hesitant to share their data related to CSSVD. To fill these gaps, image classifiers to detect CSSVD-infected cacao plants are presented in this study. The classifiers are based on VGG16, ResNet50 and Vision Transformer (ViT). The image classifiers are evaluated on a recently released and publicly accessible KaraAgroAI Cocoa dataset. The best performing image classifier, based on ResNet50, achieves 95.39\% precision, 93.75\% recall, 94.34\% F1-score and 94\% accuracy on only 20 epochs. There is a +9.75\% improvement in recall when compared to previous works. These results indicate that the image classifiers learn to identify cacao plants infected with CSSVD. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CSSVD" title="CSSVD">CSSVD</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet50" title=" ResNet50"> ResNet50</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformer" title=" vision transformer"> vision transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=KaraAgroAI%20cocoa%20dataset" title=" KaraAgroAI cocoa dataset"> KaraAgroAI cocoa dataset</a> </p> <a href="https://publications.waset.org/abstracts/169653/deep-learning-based-image-classifiers-for-detection-of-cssvd-in-cacao-plants" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169653.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=960">960</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=961">961</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20based&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>