
Search results for: vision impairment

class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="vision impairement"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1083</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: vision impairement</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1083</span> Visual Improvement with Low Vision Aids in Children with Stargardt’s Disease</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anum%20Akhter">Anum Akhter</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumaira%20Altaf"> Sumaira Altaf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: To study the effect of low vision devices i.e. telescope and magnifying glasses on distance visual acuity and near visual acuity of children with Stargardt’s disease. Setting: Low vision department, Alshifa Trust Eye Hospital, Rawalpindi, Pakistan. Methods: 52 children having Stargardt’s disease were included in the study. All children were diagnosed by pediatrics ophthalmologists. Comprehensive low vision assessment was done by me in Low vision clinic. Visual acuity was measured using ETDRS chart. Refraction and other supplementary tests were performed. Children with Stargardt’s disease were provided with different telescopes and magnifying glasses for improving far vision and near vision. Results: Out of 52 children, 17 children were males and 35 children were females. Distance visual acuity and near visual acuity improved significantly with low vision aid trial. All children showed visual acuity better than 6/19 with a telescope of higher magnification. Improvement in near visual acuity was also significant with magnifying glasses trial. Conclusions: Low vision aids are useful for improvement in visual acuity in children. Children with Stargardt’s disease who are having a problem in education and daily life activities can get help from low vision aids. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stargardt" title="Stargardt">Stargardt</a>, <a href="https://publications.waset.org/abstracts/search?q=s%20disease" title="s disease">s disease</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20vision%20aids" title=" low vision aids"> low vision aids</a>, <a href="https://publications.waset.org/abstracts/search?q=telescope" title=" telescope"> telescope</a>, <a href="https://publications.waset.org/abstracts/search?q=magnifiers" title=" magnifiers"> magnifiers</a> </p> <a href="https://publications.waset.org/abstracts/24382/visual-improvement-with-low-vision-aids-in-children-with-stargardts-disease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">538</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1082</span> Navigating Life Transitions for Young People with Vision Impairment: A Community-Based Participatory Research Approach to Accessibility and Diversity</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aikaterini%20Tavoulari">Aikaterini Tavoulari</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Proulx"> Michael Proulx</a>, <a href="https://publications.waset.org/abstracts/search?q=Karin%20Petrini"> Karin Petrini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: This study aims to explore the unique challenges faced by young individuals with vision impairment (VI) during key life transitions, utilizing a community-based participatory research (CBPR) approach to identify limitations and positive aspects of existing support systems, with a focus on accessibility and diversity. Design: The study employs a qualitative CBPR design, engaging young participants with VI through online and in-person working groups over six months, prioritizing their active involvement and diverse perspectives. Methods: Twenty-one young individuals with VI from across the UK and with different VI conditions were recruited to participate in the study via a climbing and virtual reality event and stakeholders’ support. Data collection methods included open discussions, forum exchanges, and qualitative questionnaires. The data were analyzed with NVivo using inductive thematic analysis to identify key themes and patterns related to the challenges and experiences of life transitions for this diverse population. Results: The analysis revealed barriers to accessibility, such as assumptions about what a person with VI can do, inaccessibility to material, noisy environments, and insufficient training with assistive technologies. Enablers included guidance from diverse professionals and peers, multisensory approaches (beyond tactile), and peer collaborations. This study underscores the need for developing accessible and tailored strategies together with these young people to address the specific needs of this diverse population during critical life transitions (e.g., to independent living, employment and higher education). 
Conclusion: Engaging and co-designing effective approaches and tools with young people with VI is key to tackling the specific accessibility barriers they encounter. These approaches should target the different transitional periods of their life journey, promoting diversity and inclusion.
Keywords: vision impairment, life transitions, qualitative research, community-based participatory design, accessibility
Procedia: https://publications.waset.org/abstracts/185282/navigating-life-transitions-for-young-people-with-vision-impairment-a-community-based-participatory-research-approach-to-accessibility-and-diversity | PDF: https://publications.waset.org/abstracts/185282.pdf | Downloads: 48

1081. Optimizing Machine Vision System Setup Accuracy by Six-Sigma DMAIC Approach
Authors: Joseph C. Chen
Abstract: Machine vision systems provide automatic inspection that can reduce manufacturing costs considerably. However, few principles have been established for optimizing a machine vision system so that it functions more accurately in industrial practice; most published design techniques for improving its accuracy are complicated and impractical. This paper discusses implementing the Six Sigma Define, Measure, Analyze, Improve, and Control (DMAIC) approach to optimize the setup parameters of a machine vision system used as a direct measurement technique. The research follows a case study showing how the Six Sigma DMAIC methodology was put into use.
Keywords: DMAIC, machine vision system, process capability, Taguchi Parameter Design
Procedia: https://publications.waset.org/abstracts/68243/optimizing-machine-vision-system-setup-accuracy-by-six-sigma-dmaic-approach | PDF: https://publications.waset.org/abstracts/68243.pdf | Downloads: 436
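The abstract gives no formulas, but "process capability" in the keywords points at the standard Six Sigma indices. As a minimal sketch, assuming hypothetical specification limits and gauge readings (none of these numbers come from the case study), Cp and Cpk can be computed as:

```python
import statistics

def process_capability(measurements, lsl, usl):
    """Cp and Cpk for a set of vision-system gauge readings.

    lsl/usl are the lower/upper specification limits of the
    dimension being measured (values here are illustrative).
    """
    mu = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)
    cp = (usl - lsl) / (6 * sigma)                # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)   # penalizes an off-center mean
    return cp, cpk

# Hypothetical gauge readings (mm) from the vision system
readings = [10.01, 9.98, 10.02, 10.00, 9.99, 10.03, 9.97, 10.01]
cp, cpk = process_capability(readings, lsl=9.90, usl=10.10)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

Cp compares the process spread against the tolerance band, while Cpk also accounts for centering, which is what tuning setup parameters in the Improve step would aim to raise.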
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DMAIC" title="DMAIC">DMAIC</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision%20system" title=" machine vision system"> machine vision system</a>, <a href="https://publications.waset.org/abstracts/search?q=process%20capability" title=" process capability"> process capability</a>, <a href="https://publications.waset.org/abstracts/search?q=Taguchi%20Parameter%20Design" title=" Taguchi Parameter Design"> Taguchi Parameter Design</a> </p> <a href="https://publications.waset.org/abstracts/68243/optimizing-machine-vision-system-setup-accuracy-by-six-sigma-dmaic-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68243.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">436</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1080</span> A Systematic Categorization of Arguments against the Vision Zero Goal: A Literature Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Henok%20Girma%20Abebe">Henok Girma Abebe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Vision Zero is a long-term goal of preventing all road traffic fatalities and serious injuries which was first adopted in Sweden in 1997. It is based on the assumption that death and serious injury in the road system is morally unacceptable. In order to approach this end, vision zero has put in place strategies that are radically different from the traditional safety work. The vision zero, for instance, promoted the adoption of the best available technology to promote safety, and placed the ultimate responsibility for traffic safety on system designers. Despite Vision Zero’s moral appeal and its expansion to different safety areas and also parts of the world, important philosophical concerns related to the adoption and implementation of the vision zero remain to be addressed. Moreover, the vision zero goal has been criticized on different grounds. The aim of this paper is to identify and systematically categorize criticisms that have been put forward against vision zero. The findings of the paper are solely based on a critical analysis of secondary sources and snowball method is employed to identify the relevant philosophical and empirical literatures. Two general categories of criticisms on the vision zero goal are identified. The first category consists of criticisms that target the setting of vision zero as a ‘goal’ and some of the basic assumptions upon which the goal is based. Among others, the goal of achieving zero fatalities and serious injuries, together with vision zero’s lexicographical prioritization of safety has been criticized as unrealistic. The second category consists of criticisms that target the strategies put in place to achieve the goal of zero fatalities and serious injuries. For instance, Vision zero’s responsibility ascription for road safety and its rejection of cost-benefit analysis in the formulation and adoption of safety measures has both been criticized as counterproductive. In this category also falls the criticism that Vision Zero safety measures tend to be too paternalistic. 
Significant improvements have been recorded in road safety work since the adoption of Vision Zero; for it to succeed even further, the philosophical issues and criticisms associated with it must be identified and dealt with critically.
Keywords: criticisms, systems approach, traffic safety, vision zero
Procedia: https://publications.waset.org/abstracts/97167/a-systematic-categorization-of-arguments-against-the-vision-zero-goal-a-literature-review | PDF: https://publications.waset.org/abstracts/97167.pdf | Downloads: 301

1079. Inspection of Railway Track Fastening Elements Using Artificial Vision
Authors: Abdelkrim Belhaoua, Jean-Pierre Radoux
Abstract: In France, the railway network is one of the main transport infrastructures and the second largest in Europe. Railway inspection is therefore an important maintenance task for ensuring passenger safety, and it consumes significant personnel and technical resources. Artificial vision has recently been applied to several railway applications because of its potential to improve efficiency and accuracy when analyzing large databases of acquired images. This paper presents a vision system that detects fastening elements using an artificial vision approach. The system acquires railway images with a CCD camera mounted under a control carriage; the images are stitched together before being processed. Experimental results show that the proposed method detects fasteners robustly in a complex environment.
Keywords: computer vision, image processing, railway inspection, image stitching, fastener recognition, neural network
Procedia: https://publications.waset.org/abstracts/38749/inspection-of-railway-track-fastening-elements-using-artificial-vision | PDF: https://publications.waset.org/abstracts/38749.pdf | Downloads: 453
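The abstract describes stitching the CCD frames into a continuous image before processing. A minimal sketch of that front end using OpenCV's scan-mode stitcher follows; the file names are placeholders, and the fastener detector itself (a neural network whose architecture the abstract does not specify) is left as a stub.

```python
import cv2

# Load consecutive frames captured under the control carriage
# (paths are placeholders; the paper's data is not public).
frames = [cv2.imread(f"track_{i:04d}.png") for i in range(4)]

# Stitch overlapping frames into one continuous track image,
# as the paper describes doing before processing. SCANS mode
# suits flat, camera-translating captures like a track survey.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, panorama = stitcher.stitch(frames)
if status != cv2.Stitcher_OK:
    raise RuntimeError(f"stitching failed with status {status}")

# Stub for the fastener detector: any classifier could be slotted
# in here over sliding windows or region proposals.
gray = cv2.cvtColor(panorama, cv2.COLOR_BGR2GRAY)
```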
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=railway%20inspection" title=" railway inspection"> railway inspection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20stitching" title=" image stitching"> image stitching</a>, <a href="https://publications.waset.org/abstracts/search?q=fastener%20recognition" title=" fastener recognition"> fastener recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a> </p> <a href="https://publications.waset.org/abstracts/38749/inspection-of-railway-track-fastening-elements-using-artificial-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38749.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">453</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1078</span> The Role of Synthetic Data in Aerial Object Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ava%20Dodd">Ava Dodd</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonathan%20Adams"> Jonathan Adams</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this study is to explore the characteristics of developing a machine learning application using synthetic data. The study is structured to develop the application for the purpose of deploying the computer vision model. The findings discuss the realities of attempting to develop a computer vision model for practical purpose, and detail the processes, tools, and techniques that were used to meet accuracy requirements. The research reveals that synthetic data represents another variable that can be adjusted to improve the performance of a computer vision model. Further, a suite of tools and tuning recommendations are provided. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20data" title=" synthetic data"> synthetic data</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv4" title=" YOLOv4"> YOLOv4</a> </p> <a href="https://publications.waset.org/abstracts/139194/the-role-of-synthetic-data-in-aerial-object-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139194.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">225</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1077</span> Functional Vision of Older People in Galician Nursing Homes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20V%C3%A1zquez">C. Vázquez</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20M.%20Gigirey"> L. M. Gigirey</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20P.%20del%20Oro"> C. P. del Oro</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Seoane"> S. Seoane </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Early detection of visual problems plays a key role in the aging process. However, although vision problems are common among older people, the percentage of aging people who perform regular optometric exams is low. In fact, uncorrected refractive errors are one of the main causes of visual impairment in this group of the population. Purpose: To evaluate functional vision of older residents in order to show the urgent need of visual screening programs in Galician nursing homes. Methodology: We examined 364 older adults aged 65 years and over. To measure vision of the daily living, we tested distance and near presenting visual acuity (binocular visual acuity with habitual correction if warn, directional E-Snellen) Presenting near vision was tested at the usual working distance. We defined visual impairment (distance and near) as a presenting visual acuity less than 0.3. Exclusion criteria included immobilized residents unable to reach the USC Dual Sensory Loss Unit for visual screening. Association between categorical variables was performed using chi-square tests. We used Pearson and Spearman correlation tests and the variance analysis to determine differences between groups of interest. Results: 23,1% of participants have visual impairment for distance vision and 16,4% for near vision. The percentage of residents with far and near visual impairment reaches 8,2%. As expected, prevalence of visual impairment increases with age. No differences exist with regard to the level of functional vision between gender. Differences exist between age group respect to distance vision, but not in case of near vision. Conclusion: prevalence of visual impairment is high among the older people tested in this pilot study. This means a high percentage of older people with limitations in their daily life activities. 
It is necessary to develop an effective vision screening program for the early detection of vision problems in Galician nursing homes.
Keywords: functional vision, elders, aging, nursing homes
Procedia: https://publications.waset.org/abstracts/17989/functional-vision-of-older-people-in-galician-nursing-homes | PDF: https://publications.waset.org/abstracts/17989.pdf | Downloads: 408

1076. Multichannel Object Detection with Event Camera
Authors: Rafael Iliasov, Alessandro Golkar
Abstract: Object detection based on event vision has been a dynamically growing field in computer vision for the last 16 years. In this work, we create multiple channels from a single event camera and propose an event fusion method (EFM) to enhance object detection in event-based vision systems. Each channel uses a different accumulation buffer to collect events from the event camera. We implement YOLOv7 for object detection, followed by a fusion algorithm. Our multichannel approach outperforms single-channel object detection by 0.7% in mean average precision (mAP) for detections overlapping ground truth at IoU = 0.5.
Keywords: event camera, object detection with multimodal inputs, multichannel fusion, computer vision
Procedia: https://publications.waset.org/abstracts/190247/multichannel-object-detection-with-event-camera | PDF: https://publications.waset.org/abstracts/190247.pdf | Downloads: 27
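The per-channel accumulation the abstract describes can be sketched as follows: each channel integrates the same event stream over a different time window before being handed to a detector. The event layout, sensor resolution, and window lengths below are assumptions for illustration; the paper's EFM fusion and YOLOv7 stages are not reproduced.

```python
import numpy as np

def accumulate(events, t_end, window_us, shape=(480, 640)):
    """Accumulate events from the last `window_us` microseconds into
    a 2D frame. `events` is an array of rows (t, x, y, polarity)."""
    frame = np.zeros(shape, dtype=np.float32)
    recent = events[events[:, 0] >= t_end - window_us]
    for t, x, y, p in recent:
        frame[int(y), int(x)] += 1.0 if p > 0 else -1.0
    return frame

# Hypothetical event stream; each channel uses a different buffer
# length, mimicking the per-channel accumulation buffers.
events = np.array([[1000, 10, 20, 1], [4000, 11, 20, -1], [9000, 12, 21, 1]])
channels = [accumulate(events, t_end=10_000, window_us=w)
            for w in (2_000, 5_000, 10_000)]
# Each channel would then be fed to a detector (YOLOv7 in the paper)
# and the per-channel detections fused.
```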
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=event%20camera" title="event camera">event camera</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20with%20multimodal%20inputs" title=" object detection with multimodal inputs"> object detection with multimodal inputs</a>, <a href="https://publications.waset.org/abstracts/search?q=multichannel%20fusion" title=" multichannel fusion"> multichannel fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/190247/multichannel-object-detection-with-event-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190247.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">27</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1075</span> Role of Vision Centers in Eliminating Avoidable Blindness Caused Due to Uncorrected Refractive Error in Rural South India</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ranitha%20Guna%20Selvi%20D">Ranitha Guna Selvi D</a>, <a href="https://publications.waset.org/abstracts/search?q=Ramakrishnan%20R"> Ramakrishnan R</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohideen%20Abdul%20Kader"> Mohideen Abdul Kader</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: To study the role of Vision centers in managing preventable blindness through refractive error correction in Rural South India. Methods: A retrospective analysis of patients attending 15 Vision centers in Rural South India from a period of January 2021 to December 2021 was done. Medical records of 10,85,81 patients both new and reviewed, 79,562 newly registered patients and 29,019 review patient’s from15 Vision centers were included for data analysis. All the patients registered at the vision center underwent basic eye examination, including visual acuity, IOP measurement, Slit-lamp examination, retinoscopy, Fundus examination etc. Results: A total of 1,08,581 patients were included in the study. Of the total 1,08,581 patients, 79,562 were newly registered patients at Vision center and 29,019 were review patients. Males were 52,201(48.1%) and Females were 56,308(51.9) among them. The mean age of all examined patients was 41.03 ± 20.9 years (Standard deviation) and ranged from 01 – 113 years. Presenting mean visual acuity was 0.31 ± 0.5 in the right eye and 0.31 ± 0.4 in the left eye. Of the 1,08,581 patients 22,770 patients had refractive error in right eye and 22,721 patients had uncorrected refractive error in left eye. Glass prescription was given to 17,178 (15.8%) patients. 8,109 (7.5%) patients were referred to the base hospital for specialty clinic expert opinion or for cataract surgery. Conclusion: Vision center utilizing teleconsultation for comprehensive eye screening unit is a very effective tool in reducing the avoidable visual impairment caused due to uncorrected refractive error. Vision Centre model is believed to be efficient as it facilitates early detection and management of uncorrected refractive errors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=refractive%20error" title="refractive error">refractive error</a>, <a href="https://publications.waset.org/abstracts/search?q=uncorrected%20refractive%20error" title=" uncorrected refractive error"> uncorrected refractive error</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20center" title=" vision center"> vision center</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20technician" title=" vision technician"> vision technician</a>, <a href="https://publications.waset.org/abstracts/search?q=teleconsultation" title=" teleconsultation"> teleconsultation</a> </p> <a href="https://publications.waset.org/abstracts/146361/role-of-vision-centers-in-eliminating-avoidable-blindness-caused-due-to-uncorrected-refractive-error-in-rural-south-india" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146361.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1074</span> Human Motion Capture: New Innovations in the Field of Computer Vision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Najm%20Alotaibi">Najm Alotaibi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human motion capture has become one of the major area of interest in the field of computer vision. Some of the major application areas that have been rapidly evolving include the advanced human interfaces, virtual reality and security/surveillance systems. This study provides a brief overview of the techniques and applications used for the markerless human motion capture, which deals with analyzing the human motion in the form of mathematical formulations. The major contribution of this research is that it classifies the computer vision based techniques of human motion capture based on the taxonomy, and then breaks its down into four systematically different categories of tracking, initialization, pose estimation and recognition. The detailed descriptions and the relationships descriptions are given for the techniques of tracking and pose estimation. The subcategories of each process are further described. Various hypotheses have been used by the researchers in this domain are surveyed and the evolution of these techniques have been explained. It has been concluded in the survey that most researchers have focused on using the mathematical body models for the markerless motion capture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20motion%20capture" title="human motion capture">human motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=vision-based" title=" vision-based"> vision-based</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a> </p> <a href="https://publications.waset.org/abstracts/22770/human-motion-capture-new-innovations-in-the-field-of-computer-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22770.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1073</span> Development of a Computer Vision System for the Blind and Visually Impaired Person</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20C.%20Belleza">Rodrigo C. Belleza</a>, <a href="https://publications.waset.org/abstracts/search?q=Jr."> Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Roselyn%20A.%20Maa%C3%B1o"> Roselyn A. Maaño</a>, <a href="https://publications.waset.org/abstracts/search?q=Karl%20Patrick%20E.%20Camota"> Karl Patrick E. Camota</a>, <a href="https://publications.waset.org/abstracts/search?q=Darwin%20Kim%20Q.%20Bulawan"> Darwin Kim Q. Bulawan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Eyes are an essential and conspicuous organ of the human body. Human eyes are outward and inward portals of the body that allows to see the outside world and provides glimpses into ones inner thoughts and feelings. Inevitable blindness and visual impairments may result from eye-related disease, trauma, or congenital or degenerative conditions that cannot be corrected by conventional means. The study emphasizes innovative tools that will serve as an aid to the blind and visually impaired (VI) individuals. The researchers fabricated a prototype that utilizes the Microsoft Kinect for Windows and Arduino microcontroller board. The prototype facilitates advanced gesture recognition, voice recognition, obstacle detection and indoor environment navigation. Open Computer Vision (OpenCV) performs image analysis, and gesture tracking to transform Kinect data to the desired output. A computer vision technology device provides greater accessibility for those with vision impairments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=algorithms" title="algorithms">algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=blind" title=" blind"> blind</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20systems" title=" embedded systems"> embedded systems</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a> </p> <a href="https://publications.waset.org/abstracts/2016/development-of-a-computer-vision-system-for-the-blind-and-visually-impaired-person" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2016.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1072</span> Powerful Laser Diode Matrixes for Active Vision Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dzmitry%20M.%20Kabanau">Dzmitry M. Kabanau</a>, <a href="https://publications.waset.org/abstracts/search?q=Vladimir%20V.%20Kabanov"> Vladimir V. Kabanov</a>, <a href="https://publications.waset.org/abstracts/search?q=Yahor%20V.%20Lebiadok"> Yahor V. Lebiadok</a>, <a href="https://publications.waset.org/abstracts/search?q=Denis%20V.%20Shabrov"> Denis V. Shabrov</a>, <a href="https://publications.waset.org/abstracts/search?q=Pavel%20V.%20Shpak"> Pavel V. Shpak</a>, <a href="https://publications.waset.org/abstracts/search?q=Gevork%20T.%20Mikaelyan"> Gevork T. Mikaelyan</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandr%20P.%20Bunichev"> Alexandr P. Bunichev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article is deal with the experimental investigations of the laser diode matrixes (LDM) based on the AlGaAs/GaAs heterostructures (lasing wavelength 790-880 nm) to find optimal LDM parameters for active vision systems. In particular, the dependence of LDM radiation pulse power on the pulse duration and LDA active layer heating as well as the LDM radiation divergence are discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=active%20vision%20systems" title="active vision systems">active vision systems</a>, <a href="https://publications.waset.org/abstracts/search?q=laser%20diode%20matrixes" title=" laser diode matrixes"> laser diode matrixes</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20properties" title=" thermal properties"> thermal properties</a>, <a href="https://publications.waset.org/abstracts/search?q=radiation%20divergence" title=" radiation divergence"> radiation divergence</a> </p> <a href="https://publications.waset.org/abstracts/19451/powerful-laser-diode-matrixes-for-active-vision-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19451.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">610</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1071</span> Performance Analysis of Vision-Based Transparent Obstacle Avoidance for Construction Robots</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siwei%20Chang">Siwei Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Heng%20Li"> Heng Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Haitao%20Wu"> Haitao Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xin%20Fang"> Xin Fang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Construction robots are receiving more and more attention as a promising solution to the manpower shortage issue in the construction industry. The development of intelligent control techniques that assist in controlling the robots to avoid transparency and reflected building obstacles is crucial for guaranteeing the adaptability and flexibility of mobile construction robots in complex construction environments. With the boom of computer vision techniques, a number of studies have proposed vision-based methods for transparent obstacle avoidance to improve operation accuracy. However, vision-based methods are also associated with disadvantages such as high computational costs. To provide better perception and value evaluation, this study aims to analyze the performance of vision-based techniques for avoiding transparent building obstacles. To achieve this, commonly used sensors, including a lidar, an ultrasonic sensor, and a USB camera, are equipped on the robotic platform to detect obstacles. A Raspberry Pi 3 computer board is employed to compute data collecting and control algorithms. The turtlebot3 burger is employed to test the programs. On-site experiments are carried out to observe the performance in terms of success rate and detection distance. Control variables include obstacle shapes and environmental conditions. The findings contribute to demonstrating how effectively vision-based obstacle avoidance strategies for transparent building obstacle avoidance and provide insights and informed knowledge when introducing computer vision techniques in the aforementioned domain. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=construction%20robot" title="construction robot">construction robot</a>, <a href="https://publications.waset.org/abstracts/search?q=obstacle%20avoidance" title=" obstacle avoidance"> obstacle avoidance</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=transparent%20obstacle" title=" transparent obstacle"> transparent obstacle</a> </p> <a href="https://publications.waset.org/abstracts/165433/performance-analysis-of-vision-based-transparent-obstacle-avoidance-for-construction-robots" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165433.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">80</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1070</span> Cone Contrast Sensitivity of Normal Trichromats and Those with Red-Green Dichromats</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tatsuya%20Iizuka">Tatsuya Iizuka</a>, <a href="https://publications.waset.org/abstracts/search?q=Takushi%20Kawamorita"> Takushi Kawamorita</a>, <a href="https://publications.waset.org/abstracts/search?q=Tomoya%20Handa"> Tomoya Handa</a>, <a href="https://publications.waset.org/abstracts/search?q=Hitoshi%20Ishikawa"> Hitoshi Ishikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We report normative cone contrast sensitivity values and sensitivity and specificity values for a computer-based color vision test, the cone contrast test-HD (CCT-HD). The participants included 50 phakic eyes with normal color vision (NCV) and 20 dichromatic eyes (ten with protanopia and ten with deuteranopia). The CCT-HD was used to measure L, M, and S-CCT-HD scores (color vision deficiency, L-, M-cone logCS≦1.65, S-cone logCS≦0.425) to investigate the sensitivity and specificity of CCT-HD based on anomalous-type diagnosis with animalscope. The mean ± standard error L-, M-, S-cone logCS for protanopia were 0.90±0.04, 1.65±0.03, and 0.63±0.02, respectively; for deuteranopia 1.74±0.03, 1.31±0.03, and 0.61±0.06, respectively; and for age-matched NCV were 1.89±0.04, 1.84±0.04, and 0.60±0.03, respectively, with significant differences for each group except for S-CCT-HD (Bonferroni corrected α = 0.0167, p < 0.0167). The sensitivity and specificity of CCT-HD were 100% for protan and deutan in diagnosing abnormal types from 20 to 64 years of age, but the specificity decreased to 65% for protan and 55% for deutan in older persons > 65. CCT-HD is comparable to the diagnostic performance of the anomalous type in the anomaloscope for the 20-64-year-old age group. However, the results should be interpreted cautiously in those ≥ 65 years. They are more susceptible to acquired color vision deficiencies due to the yellowing of the crystalline lens and other factors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cone%20contrast%20test%20HD" title="cone contrast test HD">cone contrast test HD</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20vision%20test" title=" color vision test"> color vision test</a>, <a href="https://publications.waset.org/abstracts/search?q=congenital%20color%20vision%20deficiency" title=" congenital color vision deficiency"> congenital color vision deficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=red-green%20dichromacy" title=" red-green dichromacy"> red-green dichromacy</a>, <a href="https://publications.waset.org/abstracts/search?q=cone%20contrast%20sensitivity" title=" cone contrast sensitivity"> cone contrast sensitivity</a> </p> <a href="https://publications.waset.org/abstracts/159154/cone-contrast-sensitivity-of-normal-trichromats-and-those-with-red-green-dichromats" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159154.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1069</span> The Education-Development Nexus: The Vision of International Organizations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thibaut%20Lauwerier">Thibaut Lauwerier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This presentation will cover the vision of international organizations on the link between development and education. This issue is very relevant to address the general topic of the conference. 'Educating for development' is indeed at the heart of their discourse. For most of international organizations involved in education, it is important to invest in this field since it is at the service of development. The idea of this presentation is to better understand the vision of development according to these international organizations and how education can contribute to this type of development. To address this issue, we conducted a comparative study of three major international organizations (OECD, UNESCO and World Bank) influencing education policy at the international level. The data come from the strategic reports of these organizations over the period 1990-2015. The results show that the visions of development refer mainly to the neoliberal agenda, despite evolutions, even contradictions. And so, education must increase productivity, improve economic growth, etc. UNESCO, which has a less narrow conception of the development and therefore the aims of education, does not have the same means as the two other organizations to advocate for an alternative vision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=development" title="development">development</a>, <a href="https://publications.waset.org/abstracts/search?q=education" title=" education"> education</a>, <a href="https://publications.waset.org/abstracts/search?q=international%20organizations" title=" international organizations"> international organizations</a>, <a href="https://publications.waset.org/abstracts/search?q=poilcy" title=" poilcy"> poilcy</a> </p> <a href="https://publications.waset.org/abstracts/89396/the-education-development-nexus-the-vision-of-international-organizations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89396.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">221</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1068</span> A Review: Detection and Classification Defects on Banana and Apples by Computer Vision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zahow%20Muoftah">Zahow Muoftah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traditional manual visual grading of fruits has been one of the agricultural industry’s major challenges due to its laborious nature as well as inconsistency in the inspection and classification process. The main requirements for computer vision and visual processing are some effective techniques for identifying defects and estimating defect areas. Automated defect detection using computer vision and machine learning has emerged as a promising area of research with a high and direct impact on the visual inspection domain. Grading, sorting, and disease detection are important factors in determining the quality of fruits after harvest. Many studies have used computer vision to evaluate the quality level of fruits during post-harvest. Many studies have used computer vision to evaluate the quality level of fruits during post-harvest. Many studies have been conducted to identify diseases and pests that affect the fruits of agricultural crops. However, most previous studies concentrated solely on the diagnosis of a lesion or disease. This study focused on a comprehensive study to identify pests and diseases of apple and banana fruits using detection and classification defects on Banana and Apples by Computer Vision. As a result, the current article includes research from these domains as well. Finally, various pattern recognition techniques for detecting apple and banana defects are discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=banana" title=" banana"> banana</a>, <a href="https://publications.waset.org/abstracts/search?q=apple" title=" apple"> apple</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/154514/a-review-detection-and-classification-defects-on-banana-and-apples-by-computer-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/154514.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">106</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1067</span> The Conception of Implementation of Vision for European Forensic Science 2020 in Lithuania</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Egl%C4%97%20Bilevi%C4%8Di%C5%ABt%C4%97">Eglė Bilevičiūtė</a>, <a href="https://publications.waset.org/abstracts/search?q=Vidmantas%20Egidijus%20Kurapka"> Vidmantas Egidijus Kurapka</a>, <a href="https://publications.waset.org/abstracts/search?q=Snieguol%C4%97%20Matulien%C4%97"> Snieguolė Matulienė</a>, <a href="https://publications.waset.org/abstracts/search?q=Sigut%C4%97%20Stankevi%C4%8Di%C5%ABt%C4%97"> Sigutė Stankevičiūtė</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Council of European Union (EU Council) has stressed on several occasions the need for a concerted, comprehensive and effective solution to delinquency problems in EU communities. In the context of establishing a European Forensic Science Area and the development of forensic science infrastructure in Europe, EU Council believes that forensic science can significantly contribute to the efficiency of law enforcement, crime prevention and combating crimes. Lithuanian scientists have consolidated to implement a project named “Conception of the vision for European Forensic Science 2020 implementation in Lithuania” (the project is funded for the period of 1 March 2014 - 31 December 2016) with the objective to create a conception of implementation of the vision for European Forensic Science 2020 in Lithuania by 1) evaluating the current status of Lithuania’s forensic system and opportunities for its improvement; 2) analysing achievements and knowledge in investigation of crimes listed in conclusions of EU Council on the vision for European Forensic Science 2020 including creation of a European Forensic Science Area and the development of forensic science infrastructure in Europe: trafficking in human beings, organised crime and terrorism; 3) analysing conceptions of criminalistics, which differ in different EU member states due to the variety of forensic schools, and finding means for their harmonization. Apart from the conception of implementation of the vision for European Forensic Science 2020 in Lithuania, the project is expected to suggest provisions that will be relevant to other EU countries as well. 
Consequently, the presented conception could initiate a project for a common vision of European forensic science and contribute to the development of the EU as an area of freedom, security, and justice. The article presents the main ideas of the EU Council's conception of the vision for European Forensic Science 2020 and analyses its legal background, as well as the prospects of and challenges for its implementation in Lithuania and the EU.
Keywords: EUROVIFOR, standardization, vision for European Forensic Science 2020, Lithuania
Procedia: https://publications.waset.org/abstracts/7731/the-conception-of-implementation-of-vision-for-european-forensic-science-2020-in-lithuania | PDF: https://publications.waset.org/abstracts/7731.pdf | Downloads: 407

1066. Texture Identification Using Vision System: A Method to Predict Functionality of a Component
Authors: Varsha Singh, Shraddha Prajapati, M. B. Kiran
Abstract: Texture identification is useful in predicting the functionality of a component. Many existing texture identification methods are contact methods, which limits their measuring speed: they use a diamond stylus, and the sharp stylus may damage the surface under inspection, so they are suited only to statistical sampling. Though contact methods are very accurate, they do not give complete information for full characterization of the surface. In this context, the presented method assumes special significance. The method uses a relatively low-cost vision system for image acquisition, and software based on the wavelet transform was developed for analyzing the texture images. Specimens were made using different manufacturing processes (shaping, grinding, milling, etc.). During experimentation, the specimens were illuminated with appropriate lighting, and texture images were captured using a CCD camera connected to the vision system. The software installed on the vision system processes these images and subsequently identifies the texture of the manufacturing process.
Keywords: diamond stylus, manufacturing process, texture identification, vision system
Procedia: https://publications.waset.org/abstracts/61722/texture-identification-using-vision-system-a-method-to-predict-functionality-of-a-component | PDF: https://publications.waset.org/abstracts/61722.pdf | Downloads: 289
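The abstract says the analysis software is based on the wavelet transform but gives no further detail. A plausible sketch (an assumption, not the authors' code) computes per-subband energies with PyWavelets as texture features that a classifier could then use to separate shaped, ground, and milled surfaces:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_features(image, wavelet="db4", levels=3):
    """Energy of each detail subband of a 2D wavelet decomposition;
    a compact texture signature often used to distinguish machined
    surfaces. Illustrative, not the paper's implementation."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = []
    for cH, cV, cD in coeffs[1:]:          # skip the approximation band
        for band in (cH, cV, cD):
            feats.append(float(np.mean(band ** 2)))
    return feats

texture = np.random.rand(128, 128)         # stand-in for a CCD texture image
print(wavelet_energy_features(texture))    # 9 values: 3 levels x 3 orientations
```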
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diamond%20stylus" title="diamond stylus">diamond stylus</a>, <a href="https://publications.waset.org/abstracts/search?q=manufacturing%20process" title=" manufacturing process"> manufacturing process</a>, <a href="https://publications.waset.org/abstracts/search?q=texture%20identification" title=" texture identification"> texture identification</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20system" title=" vision system"> vision system</a> </p> <a href="https://publications.waset.org/abstracts/61722/texture-identification-using-vision-system-a-method-to-predict-functionality-of-a-component" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61722.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">289</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1065</span> Examining the Significance of Service Learning in Driving the Purpose of a Rural-Based University in South Africa</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20Maphosa">C. Maphosa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ndileleni%20Mudzielwana"> Ndileleni Mudzielwana</a>, <a href="https://publications.waset.org/abstracts/search?q=Lufuno%20Phillip%20Netshifhefhe"> Lufuno Phillip Netshifhefhe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In line with established mission and vision, a university articulates its focus and purpose of existence. The conduct of business in a university should be for the furtherance of the mission and vision. Teaching and learning should play a pivotal role in driving the purpose of a university. In this paper, the researchers examine how service learning could be significant in driving the purpose of a rural-based university whose focus is to promote rural development. The importance of institutions’ vision and mission statement is explored and the vision and mission of the said university examined closely. The concept rural development and the contribution of a university in its promotion is discussed. Service learning as a teaching and learning approach is examined and its significance in driving the purpose of a rural-based university explained. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=relevance" title="relevance">relevance</a>, <a href="https://publications.waset.org/abstracts/search?q=differentiation" title=" differentiation"> differentiation</a>, <a href="https://publications.waset.org/abstracts/search?q=purpose" title=" purpose"> purpose</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching" title=" teaching"> teaching</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a> </p> <a href="https://publications.waset.org/abstracts/52164/examining-the-significance-of-service-learning-in-driving-the-purpose-of-a-rural-based-university-in-south-africa" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52164.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1064</span> A New and Simple Method of Plotting Binocular Single Vision Field (BSVF) using the Cervical Range of Motion - CROM - Device</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mihir%20Kothari">Mihir Kothari</a>, <a href="https://publications.waset.org/abstracts/search?q=Heena%20Khan"> Heena Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Vivek%20Rathod"> Vivek Rathod</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Assessment of binocular single vision field (BSVF) is traditionally done using a Goldmann perimeter. The measurement of BSVF is important for the management of incomitant strabismus, viz. orbital fractures, thyroid orbitopathy, oculomotor cranial nerve palsies, Duane syndrome etc. In this paper, we describe a new technique for measuring BSVF using a CROM device. Goldmann perimeter is very bulky and expensive (Euro 5000.00 or more) instrument which is 'almost' obsolete from the contemporary ophthalmology practice. Whereas, CROM can be easily made in the DIY (do it yourself) manner for the fraction of the price of the perimeter (only Euro 15.00). Moreover, CROM is useful for the accurate measurement of ocular torticollis vis. nystagmus, paralytic or incomitant squint etc, and it is highly portable. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=binocular%20single%20vision" title="binocular single vision">binocular single vision</a>, <a href="https://publications.waset.org/abstracts/search?q=perimetry" title=" perimetry"> perimetry</a>, <a href="https://publications.waset.org/abstracts/search?q=cervical%20rgen%20of%20motion" title=" cervical rgen of motion"> cervical rgen of motion</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20field" title=" visual field"> visual field</a>, <a href="https://publications.waset.org/abstracts/search?q=binocular%20single%20vision%20field" title=" binocular single vision field"> binocular single vision field</a> </p> <a href="https://publications.waset.org/abstracts/169775/a-new-and-simple-method-of-plotting-binocular-single-vision-field-bsvf-using-the-cervical-range-of-motion-crom-device" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169775.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">66</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1063</span> Facilitating Curriculum Access for Pupils with Vision Impairments: An Analysis of the Role of Specialist Teachers in England and Turkey</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kubra%20Akbayrak">Kubra Akbayrak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In parallel with increasing inclusive practice for pupils with vision impairments, the role of specialist teachers who have specialized in the area of vision impairment has dramatically changed in recent years. This study, therefore, aims to provide a holistic perspective towards the distinctive role of specialist teachers of pupils with vision impairments in different educational settings (including mainstream settings, special school settings, etc.) in Turkey and England. Within the scope of the study, semi-structured interviews have been conducted with 17 specialist teachers in Turkey and 14 specialist teachers in England in order to reveal the perception of specialist teachers regarding their roles in different educational settings as well as their perception towards their pre-service training. As this study is a part of an ongoing PhD research, the qualitative data through semi-structured interviews will be analyzed through using Bronfenbrenner’s ecological systems theory as a theoretical framework in order to provide a holistic view regarding the role of specialist teachers particularly in facilitating curriculum access for pupils with vision impairments in England and Turkey. However, the initial findings broadly illustrate that specialist teachers who work in special school settings have different understanding regarding their roles compared to specialist teachers who work in mainstream settings in relation to promoting independence for pupils with vision impairments. The initial findings also imply that specialist teachers in England and Turkey have different perception about their roles in relation to providing specialist advice and guidance for families of pupils. 
Once the analysis is complete, it is hoped that the findings will provide insight into the role of specialist teachers and implications for programmes that prepare specialist teachers of pupils with vision impairments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=curriculum%20access" title="curriculum access">curriculum access</a>, <a href="https://publications.waset.org/abstracts/search?q=pupils%20with%20vision%20impairments" title=" pupils with vision impairments"> pupils with vision impairments</a>, <a href="https://publications.waset.org/abstracts/search?q=specialist%20teachers" title=" specialist teachers"> specialist teachers</a>, <a href="https://publications.waset.org/abstracts/search?q=special%20education" title=" special education"> special education</a> </p> <a href="https://publications.waset.org/abstracts/86753/facilitating-curriculum-access-for-pupils-with-vision-impairments-an-analysis-of-the-role-of-specialist-teachers-in-england-and-turkey" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86753.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1062</span> Functional Vision of Older People with Cognitive Impairment Living in Galician Nursing Homes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20V%C3%A1zquez">C. Vázquez</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20M.%20Gigirey"> L. M. Gigirey</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20P.%20del%20Oro"> C. P. del Oro</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Seoane"> S. Seoane</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Poor vision is common among older people, and several studies show connections between visual impairment and cognitive function. 15 older adults live in Galician Government nursing homes, and cognitive decline is one of the main reasons for admission. Objectives: (1) to evaluate the functional far and near vision of older people with cognitive impairment; (2) to determine connections between the visual and cognitive state of “our” residents. Methodology: A total of 364 older adults (aged 65 years or more) underwent visual and cognitive screening. We tested presenting visual acuity (binocular visual acuity with habitual correction, if worn) for distance and near vision (E-Snellen; usual working distance for near vision). Binocular presenting visual acuity of less than 0.3 was used as the cut-off point for the diagnosis of visual impairment. Exclusion criteria included immobilized residents unable to reach the USC Dual Sensory Loss Unit for visual screening. To screen cognition, we employed the mini-mental examination test (Spanish version). Analysis of categorical variables was performed using chi-square tests. We used Pearson and Spearman correlation tests and analysis of variance to determine differences between groups of interest (SPSS 19.0).
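<p class="card-text">A minimal sketch of the screening analysis just described (chi-square test plus Spearman correlation), using scipy in place of the SPSS package the study used, and entirely hypothetical data:</p> <pre><code># Illustrative re-creation of the screening analysis with hypothetical data
# (the study itself used SPSS 19.0; scipy and all values here are assumptions).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 364
mmse = rng.integers(10, 31, size=n)           # hypothetical mini-mental scores
distance_va = rng.uniform(0.05, 1.0, size=n)  # hypothetical distance acuities
near_va = rng.uniform(0.05, 1.0, size=n)      # hypothetical near acuities

# Chi-square: visual impairment (VA < 0.3) versus cognitive decline (MMSE < 24).
table = np.array([
    [np.sum((distance_va < 0.3) & (mmse < 24)), np.sum((distance_va < 0.3) & (mmse >= 24))],
    [np.sum((distance_va >= 0.3) & (mmse < 24)), np.sum((distance_va >= 0.3) & (mmse >= 24))],
])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")

# Rank correlation between acuity and cognition, as reported in the results below.
rho, p_rho = stats.spearmanr(near_va, mmse)
print(f"spearman rho={rho:.2f}, p={p_rho:.3f}")
</code></pre>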
Results: The percentage of residents with cognitive decline reached 32.2%. The prevalence of visual impairment for distance and near vision was higher among subjects with cognitive impairment than among those with normal cognition. A correlation exists between distance visual acuity and the mini-mental test score (controlling for age and sex), and a moderate association was found in the case of near vision (p<0.01). Conclusion: These first results show that people with cognitive impairment have poorer functional distance and near vision than those with normal cognition. The next step will be to analyse the individual contributions of distance and near vision loss to cognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20impairment" title="visual impairment">visual impairment</a>, <a href="https://publications.waset.org/abstracts/search?q=cognition" title=" cognition"> cognition</a>, <a href="https://publications.waset.org/abstracts/search?q=aging" title=" aging"> aging</a>, <a href="https://publications.waset.org/abstracts/search?q=nursing%20homes" title=" nursing homes"> nursing homes</a> </p> <a href="https://publications.waset.org/abstracts/17992/functional-vision-of-older-people-with-cognitive-impairment-living-in-galician-nursing-homes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17992.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">428</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1061</span> The Corporate Vision Effect on Rajabhat University Brand Building in Thailand</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pisit%20Potjanajaruwit">Pisit Potjanajaruwit</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to (1) investigate the corporate vision factor influencing Rajabhat University brand building in Thailand and (2) explore the influence of brand building on Rajabhat University stakeholders&rsquo; loyalty. The research used a mixed-methods design combining qualitative and quantitative approaches. The qualitative part consisted of in-depth interviews with six key informants from the executive of the Rathanagosin Rajabhat University group; the quantitative data were collected by questionnaires distributed to 400 stakeholders, including instructors, staff, students and parents of the Rathanagosin Rajabhat University group, selected by the multi-stage sampling method. Data were analyzed by structural equation modeling (SEM), and a focus group interview was conducted to confirm the model. The findings show that corporate vision had a direct and positive influence on Rajabhat University brand building, which in turn had a direct and positive influence on stakeholders&rsquo; loyalty; stakeholders&rsquo; loyalty was indirectly influenced by corporate vision through Rajabhat University brand building.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brand%20building" title="brand building">brand building</a>, <a href="https://publications.waset.org/abstracts/search?q=corporate%20vision" title=" corporate vision"> corporate vision</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajabhat%20University" title=" Rajabhat University"> Rajabhat University</a>, <a href="https://publications.waset.org/abstracts/search?q=stakeholder%E2%80%98s%20loyalty" title=" stakeholder‘s loyalty"> stakeholder‘s loyalty</a> </p> <a href="https://publications.waset.org/abstracts/39940/the-corporate-vision-effect-on-rajabhat-university-brand-building-in-thailand" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39940.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1060</span> 3D Biomechanics Analysis of Tennis Elbow Factors &amp; Injury Prevention Using Computer Vision and AI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aaron%20Yan">Aaron Yan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tennis elbow has been a leading injury and problem among amateur and even professional players. Many factors contribute to tennis elbow. In this research, we apply state of the art sensor-less computer vision and AI technology to study the biomechanics of a player’s tennis movements during training and competition as they relate to the causes of tennis elbow. We provide a framework for the analysis of key biomechanical parameters and their correlations with specific tennis stroke and movements that can lead to tennis elbow or elbow injury. We also devise a method for using AI to automatically detect player’s forms that can lead to tennis elbow development for on-court injury prevention. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tennis%20Elbow" title="Tennis Elbow">Tennis Elbow</a>, <a href="https://publications.waset.org/abstracts/search?q=Computer%20Vision" title=" Computer Vision"> Computer Vision</a>, <a href="https://publications.waset.org/abstracts/search?q=AI" title=" AI"> AI</a>, <a href="https://publications.waset.org/abstracts/search?q=3DAT" title=" 3DAT"> 3DAT</a> </p> <a href="https://publications.waset.org/abstracts/176414/3d-biomechanics-analysis-of-tennis-elbow-factors-injury-prevention-using-computer-vision-and-ai" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176414.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">46</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1059</span> Analysis of Public Space Usage Characteristics Based on Computer Vision Technology - Taking Shaping Park as an Example</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guantao%20Bai">Guantao Bai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Public space is an indispensable and important component of the urban built environment. How to more accurately evaluate the usage characteristics of public space can help improve its spatial quality. Compared to traditional survey methods, computer vision technology based on deep learning has advantages such as dynamic observation and low cost. This study takes the public space of Shaping Park as an example and, based on deep learning computer vision technology, processes and analyzes the image data of the public space to obtain the spatial usage characteristics and spatiotemporal characteristics of the public space. Research has found that the spontaneous activity time in public spaces is relatively random with a relatively short average activity time, while social activities have a relatively stable activity time with a longer average activity time. Computer vision technology based on deep learning can effectively describe the spatial usage characteristics of the research area, making up for the shortcomings of traditional research methods and providing relevant support for creating a good public space. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20spaces" title=" public spaces"> public spaces</a>, <a href="https://publications.waset.org/abstracts/search?q=using%20features" title=" using features"> using features</a> </p> <a href="https://publications.waset.org/abstracts/173323/analysis-of-public-space-usage-characteristics-based-on-computer-vision-technology-taking-shaping-park-as-an-example" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173323.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1058</span> An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jie%20Zhao">Jie Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Meng%20Su"> Meng Su</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image recognition, as one of the most critical technologies in computer vision, works to help machine-like robotics understand a scene, that is, if deployed appropriately, will trigger the revolution in remote sensing and industry automation. With the developments of AI technologies, there are many prevailing and sophisticated neural networks as technologies developed for image recognition. However, computer vision platforms as hardware, supporting neural networks for image recognition, as crucial as the neural network technologies, need to be more congruently addressed as the research subjects. In contrast, different computer vision platforms are deterministic to leverage the performance of different neural networks for recognition. In this paper, three different computer vision platforms – Jetson Nano(with 4GB), a standalone laptop(with RTX 3000s, using CUDA), and Google Colab (web-based, using GPU) are explored and four prominent neural network architectures (including AlexNet, VGG(16/19), GoogleNet, and ResNet(18/34/50)), are investigated. In the context of pairwise usage between different computer vision platforms and distinctive neural networks, with the merits of recognition accuracy and time efficiency, the performances are evaluated. In the case study using public imageNets, our findings provide a nuanced perspective on optimizing image recognition tasks across Edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=alexNet" title="alexNet">alexNet</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG" title=" VGG"> VGG</a>, <a href="https://publications.waset.org/abstracts/search?q=googleNet" title=" googleNet"> googleNet</a>, <a href="https://publications.waset.org/abstracts/search?q=resNet" title=" resNet"> resNet</a>, <a href="https://publications.waset.org/abstracts/search?q=Jetson%20nano" title=" Jetson nano"> Jetson nano</a>, <a href="https://publications.waset.org/abstracts/search?q=CUDA" title=" CUDA"> CUDA</a>, <a href="https://publications.waset.org/abstracts/search?q=COCO-NET" title=" COCO-NET"> COCO-NET</a>, <a href="https://publications.waset.org/abstracts/search?q=cifar10" title=" cifar10"> cifar10</a>, <a href="https://publications.waset.org/abstracts/search?q=imageNet%20large%20scale%20visual%20recognition%20challenge%20%28ILSVRC%29" title=" imageNet large scale visual recognition challenge (ILSVRC)"> imageNet large scale visual recognition challenge (ILSVRC)</a>, <a href="https://publications.waset.org/abstracts/search?q=google%20colab" title=" google colab"> google colab</a> </p> <a href="https://publications.waset.org/abstracts/176759/an-evaluation-of-neural-network-efficacies-for-image-recognition-on-edge-ai-computer-vision-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176759.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1057</span> Influence of Peripheral Vision Restrictions on the Walking Trajectory When Texting While Walking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Macky%20Kato">Macky Kato</a>, <a href="https://publications.waset.org/abstracts/search?q=Takeshi%20Sato"> Takeshi Sato</a>, <a href="https://publications.waset.org/abstracts/search?q=Mizuki%20Nakajima"> Mizuki Nakajima</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One major problem related to the use of smartphones is texting while simultaneously engaging in other things, resulting in serious road accidents. Apart from texting while driving being one of the most dangerous behaviors, texting while walking is also dangerous because it narrows the pedestrians’ field of vision. However, many of pedestrian text while walking very habitually. Smartphone users often overlook the potential harm associated with this behavior even while crossing roads. The successful texting while walking make them think that they are safe. The purpose of this study is to reveal of the influence of peripheral vision to the stability of walking trajectory with texting while walking. In total, 9 healthy male university students participated in the experiment. Their mean age was 21.4 years, and standard deviation was 0.7 years. They attempted to walk 10 m in three conditions. First one is the control (CTR) condition, with no phone and no restriction. The second one is the texting while walking (TWG) with no restrictions. The third one is restriction condition (PRS), with phone restricted by experimental peripheral goggles. 
The horizontal distances (HDS) of the footprints from the straight line, and their directions, were measured as a measure of horizontal stability, and the longitudinal distances (LDS) between the footprints were measured as a measure of walking rhythm. The results showed that the HDS of the footprints from the straight line increased when the participants walked in the TWG and PRS conditions, with the tendency particularly pronounced in the PRS condition. In addition, the LDS between the footprints decreased in the order of the CTR, TWG, and PRS conditions. The ANOVA results showed significant differences among the three conditions with respect to HDS. These differences show that the narrowing of the pedestrian's vision caused by smartphone use influences the walking trajectory and rhythm; pedestrians appear to make only marginal use of their peripheral vision when texting while walking. We therefore concluded that texting while walking narrows the peripheral vision and so increases the risk of accidents. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=peripheral%20vision" title="peripheral vision">peripheral vision</a>, <a href="https://publications.waset.org/abstracts/search?q=stability" title=" stability"> stability</a>, <a href="https://publications.waset.org/abstracts/search?q=texting%20while%20walking" title=" texting while walking"> texting while walking</a>, <a href="https://publications.waset.org/abstracts/search?q=walking%20trajectory" title=" walking trajectory"> walking trajectory</a> </p> <a href="https://publications.waset.org/abstracts/77017/influence-of-peripheral-vision-restrictions-on-the-walking-trajectory-when-texting-while-walking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77017.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">257</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1056</span> Essentiality of Core Strategic Vision in Continuous Cost Reduction Management</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lai%20Ving%20Kam">Lai Ving Kam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Many markets are maturing, consumer buying power is weakening, and customer preferences change rapidly. To survive, many firms adopt fast-paced continuous cost reduction and competitive pricing to remain relevant, while marketers' desire to push for more sales and revenue has intensified competition and at times cannibalized products and markets. Rapid technological change has created both hope and despair in these industries. The pressure to constantly reduce costs on the one hand, and to create and market new products at cheaper prices and with shorter life cycles on the other, has become a continuous endeavour, and the twin trends appear irreconcilable. Can a core strategic vision provide and adapt new directions in continuous cost reduction? This study investigates whether a core strategic vision is able to meet this need, allowing firms to survive and stay profitable. In today's uncertain markets, are firms falling back on their core strategic visions to take themselves out of unfavourable positions?
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=core%20strategy%20vision" title="core strategy vision">core strategy vision</a>, <a href="https://publications.waset.org/abstracts/search?q=continuous%20cost%20reduction" title=" continuous cost reduction"> continuous cost reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=fashionable%20products%20industry" title=" fashionable products industry"> fashionable products industry</a>, <a href="https://publications.waset.org/abstracts/search?q=competitive%20pricing" title=" competitive pricing"> competitive pricing</a> </p> <a href="https://publications.waset.org/abstracts/77999/essentiality-of-core-strategic-vision-in-continuous-cost-reduction-management" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77999.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1055</span> Development of Agricultural Robotic Platform for Inter-Row Plant: An Autonomous Navigation Based on Machine Vision </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alaa%20El-Din%20Rezk">Alaa El-Din Rezk </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In Egypt, management of crops still away from what is being used today by utilizing the advances of mechanical design capabilities, sensing and electronics technology. These technologies have been introduced in many places and recorm, for Straight Path, Curved Path, Sine Wave ded high accuracy in different field operations. So, an autonomous robotic platform based on machine vision has been developed and constructed to be implemented in Egyptian conditions as self-propelled mobile vehicle for carrying tools for inter/intra-row crop management based on different control modules. The experiments were carried out at plant protection research institute (PPRI) during 2014-2015 to optimize the accuracy of agricultural robotic platform control using machine vision in term of the autonomous navigation and performance of the robot’s guidance system. Results showed that the robotic platform' guidance system with machine vision was able to adequately distinguish the path and resisted image noise and did better than human operators for getting less lateral offset error. The average error of autonomous was 2.75, 19.33, 21.22, 34.18, and 16.69 mm. while the human operator was 32.70, 4.85, 7.85, 38.35 and 14.75 mm Path, Offset Discontinuity and Angle Discontinuity respectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomous%20robotic" title="autonomous robotic">autonomous robotic</a>, <a href="https://publications.waset.org/abstracts/search?q=Hough%20transform" title=" Hough transform"> Hough transform</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title=" machine vision "> machine vision </a> </p> <a href="https://publications.waset.org/abstracts/43565/development-of-agricultural-robotic-platform-for-inter-row-plant-an-autonomous-navigation-based-on-machine-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43565.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">315</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1054</span> Framework for Socio-Technical Issues in Requirements Engineering for Developing Resilient Machine Vision Systems Using Levels of Automation through the Lifecycle</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ryan%20Messina">Ryan Messina</a>, <a href="https://publications.waset.org/abstracts/search?q=Mehedi%20Hasan"> Mehedi Hasan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research is to examine the impacts of using data to generate performance requirements for automation in visual inspections using machine vision. These situations are intended for design and how projects can smooth the transfer of tacit knowledge to using an algorithm. We have proposed a framework when specifying machine vision systems. This framework utilizes varying levels of automation as contingency planning to reduce data processing complexity. Using data assists in extracting tacit knowledge from those who can perform the manual tasks to assist design the system; this means that real data from the system is always referenced and minimizes errors between participating parties. We propose using three indicators to know if the project has a high risk of failing to meet requirements related to accuracy and reliability. All systems tested achieved a better integration into operations after applying the framework. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automation" title="automation">automation</a>, <a href="https://publications.waset.org/abstracts/search?q=contingency%20planning" title=" contingency planning"> contingency planning</a>, <a href="https://publications.waset.org/abstracts/search?q=continuous%20engineering" title=" continuous engineering"> continuous engineering</a>, <a href="https://publications.waset.org/abstracts/search?q=control%20theory" title=" control theory"> control theory</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title=" machine vision"> machine vision</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20requirements" title=" system requirements"> system requirements</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20thinking" title=" system thinking"> system thinking</a> </p> <a href="https://publications.waset.org/abstracts/97643/framework-for-socio-technical-issues-in-requirements-engineering-for-developing-resilient-machine-vision-systems-using-levels-of-automation-through-the-lifecycle" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/97643.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">204</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=36">36</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=37">37</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vision%20impairement&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div 
style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { 
/*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10