Search results for: computer vision
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="computer vision"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3156</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: computer vision</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3156</span> The Role of Synthetic Data in Aerial Object Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ava%20Dodd">Ava Dodd</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonathan%20Adams"> Jonathan Adams</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this study is to explore the characteristics of developing a machine learning application using synthetic data. The study is structured to develop the application for the purpose of deploying the computer vision model. The findings discuss the realities of attempting to develop a computer vision model for practical purpose, and detail the processes, tools, and techniques that were used to meet accuracy requirements. The research reveals that synthetic data represents another variable that can be adjusted to improve the performance of a computer vision model. Further, a suite of tools and tuning recommendations are provided. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20data" title=" synthetic data"> synthetic data</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv4" title=" YOLOv4"> YOLOv4</a> </p> <a href="https://publications.waset.org/abstracts/139194/the-role-of-synthetic-data-in-aerial-object-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139194.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">225</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3155</span> A Review: Detection and Classification Defects on Banana and Apples by Computer Vision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zahow%20Muoftah">Zahow Muoftah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traditional manual visual grading of fruits has been one of the agricultural industry’s major challenges due to its laborious nature as well as inconsistency in the inspection and classification process. The main requirements for computer vision and visual processing are some effective techniques for identifying defects and estimating defect areas. Automated defect detection using computer vision and machine learning has emerged as a promising area of research with a high and direct impact on the visual inspection domain. Grading, sorting, and disease detection are important factors in determining the quality of fruits after harvest. Many studies have used computer vision to evaluate the quality level of fruits during post-harvest. Many studies have used computer vision to evaluate the quality level of fruits during post-harvest. Many studies have been conducted to identify diseases and pests that affect the fruits of agricultural crops. However, most previous studies concentrated solely on the diagnosis of a lesion or disease. This study focused on a comprehensive study to identify pests and diseases of apple and banana fruits using detection and classification defects on Banana and Apples by Computer Vision. As a result, the current article includes research from these domains as well. Finally, various pattern recognition techniques for detecting apple and banana defects are discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=banana" title=" banana"> banana</a>, <a href="https://publications.waset.org/abstracts/search?q=apple" title=" apple"> apple</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/154514/a-review-detection-and-classification-defects-on-banana-and-apples-by-computer-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/154514.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">106</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3154</span> Human Motion Capture: New Innovations in the Field of Computer Vision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Najm%20Alotaibi">Najm Alotaibi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human motion capture has become one of the major area of interest in the field of computer vision. Some of the major application areas that have been rapidly evolving include the advanced human interfaces, virtual reality and security/surveillance systems. This study provides a brief overview of the techniques and applications used for the markerless human motion capture, which deals with analyzing the human motion in the form of mathematical formulations. The major contribution of this research is that it classifies the computer vision based techniques of human motion capture based on the taxonomy, and then breaks its down into four systematically different categories of tracking, initialization, pose estimation and recognition. The detailed descriptions and the relationships descriptions are given for the techniques of tracking and pose estimation. The subcategories of each process are further described. Various hypotheses have been used by the researchers in this domain are surveyed and the evolution of these techniques have been explained. It has been concluded in the survey that most researchers have focused on using the mathematical body models for the markerless motion capture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20motion%20capture" title="human motion capture">human motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=vision-based" title=" vision-based"> vision-based</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a> </p> <a href="https://publications.waset.org/abstracts/22770/human-motion-capture-new-innovations-in-the-field-of-computer-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22770.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3153</span> Development of a Computer Vision System for the Blind and Visually Impaired Person</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20C.%20Belleza">Rodrigo C. Belleza</a>, <a href="https://publications.waset.org/abstracts/search?q=Jr."> Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Roselyn%20A.%20Maa%C3%B1o"> Roselyn A. Maaño</a>, <a href="https://publications.waset.org/abstracts/search?q=Karl%20Patrick%20E.%20Camota"> Karl Patrick E. Camota</a>, <a href="https://publications.waset.org/abstracts/search?q=Darwin%20Kim%20Q.%20Bulawan"> Darwin Kim Q. Bulawan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Eyes are an essential and conspicuous organ of the human body. Human eyes are outward and inward portals of the body that allows to see the outside world and provides glimpses into ones inner thoughts and feelings. Inevitable blindness and visual impairments may result from eye-related disease, trauma, or congenital or degenerative conditions that cannot be corrected by conventional means. The study emphasizes innovative tools that will serve as an aid to the blind and visually impaired (VI) individuals. The researchers fabricated a prototype that utilizes the Microsoft Kinect for Windows and Arduino microcontroller board. The prototype facilitates advanced gesture recognition, voice recognition, obstacle detection and indoor environment navigation. Open Computer Vision (OpenCV) performs image analysis, and gesture tracking to transform Kinect data to the desired output. A computer vision technology device provides greater accessibility for those with vision impairments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=algorithms" title="algorithms">algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=blind" title=" blind"> blind</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20systems" title=" embedded systems"> embedded systems</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a> </p> <a href="https://publications.waset.org/abstracts/2016/development-of-a-computer-vision-system-for-the-blind-and-visually-impaired-person" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2016.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3152</span> 3D Biomechanics Analysis of Tennis Elbow Factors & Injury Prevention Using Computer Vision and AI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aaron%20Yan">Aaron Yan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tennis elbow has been a leading injury and problem among amateur and even professional players. Many factors contribute to tennis elbow. In this research, we apply state of the art sensor-less computer vision and AI technology to study the biomechanics of a player’s tennis movements during training and competition as they relate to the causes of tennis elbow. We provide a framework for the analysis of key biomechanical parameters and their correlations with specific tennis stroke and movements that can lead to tennis elbow or elbow injury. We also devise a method for using AI to automatically detect player’s forms that can lead to tennis elbow development for on-court injury prevention. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tennis%20Elbow" title="Tennis Elbow">Tennis Elbow</a>, <a href="https://publications.waset.org/abstracts/search?q=Computer%20Vision" title=" Computer Vision"> Computer Vision</a>, <a href="https://publications.waset.org/abstracts/search?q=AI" title=" AI"> AI</a>, <a href="https://publications.waset.org/abstracts/search?q=3DAT" title=" 3DAT"> 3DAT</a> </p> <a href="https://publications.waset.org/abstracts/176414/3d-biomechanics-analysis-of-tennis-elbow-factors-injury-prevention-using-computer-vision-and-ai" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176414.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">46</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3151</span> Multichannel Object Detection with Event Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rafael%20Iliasov">Rafael Iliasov</a>, <a href="https://publications.waset.org/abstracts/search?q=Alessandro%20Golkar"> Alessandro Golkar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection based on event vision has been a dynamically growing field in computer vision for the last 16 years. In this work, we create multiple channels from a single event camera and propose an event fusion method (EFM) to enhance object detection in event-based vision systems. Each channel uses a different accumulation buffer to collect events from the event camera. We implement YOLOv7 for object detection, followed by a fusion algorithm. Our multichannel approach outperforms single-channel-based object detection by 0.7% in mean Average Precision (mAP) for detection overlapping ground truth with IOU = 0.5. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=event%20camera" title="event camera">event camera</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20with%20multimodal%20inputs" title=" object detection with multimodal inputs"> object detection with multimodal inputs</a>, <a href="https://publications.waset.org/abstracts/search?q=multichannel%20fusion" title=" multichannel fusion"> multichannel fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/190247/multichannel-object-detection-with-event-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190247.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">27</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3150</span> Analysis of Public Space Usage Characteristics Based on Computer Vision Technology - Taking Shaping Park as an Example</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guantao%20Bai">Guantao Bai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Public space is an indispensable and important component of the urban built environment. How to more accurately evaluate the usage characteristics of public space can help improve its spatial quality. Compared to traditional survey methods, computer vision technology based on deep learning has advantages such as dynamic observation and low cost. This study takes the public space of Shaping Park as an example and, based on deep learning computer vision technology, processes and analyzes the image data of the public space to obtain the spatial usage characteristics and spatiotemporal characteristics of the public space. Research has found that the spontaneous activity time in public spaces is relatively random with a relatively short average activity time, while social activities have a relatively stable activity time with a longer average activity time. Computer vision technology based on deep learning can effectively describe the spatial usage characteristics of the research area, making up for the shortcomings of traditional research methods and providing relevant support for creating a good public space. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20spaces" title=" public spaces"> public spaces</a>, <a href="https://publications.waset.org/abstracts/search?q=using%20features" title=" using features"> using features</a> </p> <a href="https://publications.waset.org/abstracts/173323/analysis-of-public-space-usage-characteristics-based-on-computer-vision-technology-taking-shaping-park-as-an-example" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173323.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3149</span> An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jie%20Zhao">Jie Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Meng%20Su"> Meng Su</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image recognition, as one of the most critical technologies in computer vision, works to help machine-like robotics understand a scene, that is, if deployed appropriately, will trigger the revolution in remote sensing and industry automation. With the developments of AI technologies, there are many prevailing and sophisticated neural networks as technologies developed for image recognition. However, computer vision platforms as hardware, supporting neural networks for image recognition, as crucial as the neural network technologies, need to be more congruently addressed as the research subjects. In contrast, different computer vision platforms are deterministic to leverage the performance of different neural networks for recognition. In this paper, three different computer vision platforms – Jetson Nano(with 4GB), a standalone laptop(with RTX 3000s, using CUDA), and Google Colab (web-based, using GPU) are explored and four prominent neural network architectures (including AlexNet, VGG(16/19), GoogleNet, and ResNet(18/34/50)), are investigated. In the context of pairwise usage between different computer vision platforms and distinctive neural networks, with the merits of recognition accuracy and time efficiency, the performances are evaluated. In the case study using public imageNets, our findings provide a nuanced perspective on optimizing image recognition tasks across Edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=alexNet" title="alexNet">alexNet</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG" title=" VGG"> VGG</a>, <a href="https://publications.waset.org/abstracts/search?q=googleNet" title=" googleNet"> googleNet</a>, <a href="https://publications.waset.org/abstracts/search?q=resNet" title=" resNet"> resNet</a>, <a href="https://publications.waset.org/abstracts/search?q=Jetson%20nano" title=" Jetson nano"> Jetson nano</a>, <a href="https://publications.waset.org/abstracts/search?q=CUDA" title=" CUDA"> CUDA</a>, <a href="https://publications.waset.org/abstracts/search?q=COCO-NET" title=" COCO-NET"> COCO-NET</a>, <a href="https://publications.waset.org/abstracts/search?q=cifar10" title=" cifar10"> cifar10</a>, <a href="https://publications.waset.org/abstracts/search?q=imageNet%20large%20scale%20visual%20recognition%20challenge%20%28ILSVRC%29" title=" imageNet large scale visual recognition challenge (ILSVRC)"> imageNet large scale visual recognition challenge (ILSVRC)</a>, <a href="https://publications.waset.org/abstracts/search?q=google%20colab" title=" google colab"> google colab</a> </p> <a href="https://publications.waset.org/abstracts/176759/an-evaluation-of-neural-network-efficacies-for-image-recognition-on-edge-ai-computer-vision-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176759.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3148</span> Performance Analysis of Vision-Based Transparent Obstacle Avoidance for Construction Robots</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siwei%20Chang">Siwei Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Heng%20Li"> Heng Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Haitao%20Wu"> Haitao Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xin%20Fang"> Xin Fang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Construction robots are receiving more and more attention as a promising solution to the manpower shortage issue in the construction industry. The development of intelligent control techniques that assist in controlling the robots to avoid transparency and reflected building obstacles is crucial for guaranteeing the adaptability and flexibility of mobile construction robots in complex construction environments. With the boom of computer vision techniques, a number of studies have proposed vision-based methods for transparent obstacle avoidance to improve operation accuracy. However, vision-based methods are also associated with disadvantages such as high computational costs. To provide better perception and value evaluation, this study aims to analyze the performance of vision-based techniques for avoiding transparent building obstacles. To achieve this, commonly used sensors, including a lidar, an ultrasonic sensor, and a USB camera, are equipped on the robotic platform to detect obstacles. A Raspberry Pi 3 computer board is employed to compute data collecting and control algorithms. The turtlebot3 burger is employed to test the programs. 
On-site experiments are carried out to observe performance in terms of success rate and detection distance, with obstacle shape and environmental conditions as control variables. The findings demonstrate how effective vision-based strategies are for transparent building obstacle avoidance and provide insights and informed knowledge for introducing computer vision techniques in this domain.
Keywords: construction robot, obstacle avoidance, computer vision, transparent obstacle
Procedia: https://publications.waset.org/abstracts/165433/performance-analysis-of-vision-based-transparent-obstacle-avoidance-for-construction-robots | PDF: https://publications.waset.org/abstracts/165433.pdf | Downloads: 80

3147. Video Based Ambient Smoke Detection By Detecting Directional Contrast Decrease
Authors: Omair Ghori, Anton Stadler, Stefan Wilk, Wolfgang Effelsberg
Abstract: Fire-related incidents account for extensive loss of life and material damage. Quick and reliable detection of occurring fires has high real-world implications. Whereas a major research focus lies on the detection of outdoor fires, indoor camera-based fire detection is still an open issue. Cameras in combination with computer vision help to detect flames and smoke more quickly than conventional fire detectors. In this work, we present a computer vision-based smoke detection algorithm based on contrast changes and a multi-step classification. This work accelerates computer vision-based fire detection considerably in comparison with classical indoor fire detection.
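Smoke tends to lower the local contrast of the background it drifts across; a minimal sketch of that single cue is shown below, using per-block standard deviation as the contrast measure. The block size, thresholds, and the omission of the directional analysis and multi-step classifier are all simplifying assumptions, not the authors' algorithm.

```python
import numpy as np

def block_contrast(gray, block=16):
    """Per-block standard deviation of a grayscale frame as a local-contrast measure."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    g = gray[:h, :w].reshape(h // block, block, w // block, block).astype(np.float32)
    return g.std(axis=(1, 3))

def smoke_candidates(prev_gray, curr_gray, drop_ratio=0.6, min_contrast=8.0):
    """Blocks whose local contrast fell below drop_ratio of its previous-frame value.

    Only the contrast-decrease cue is sketched; a full detector would then examine
    the persistence and growth of such regions in a multi-step classification.
    """
    c_prev, c_curr = block_contrast(prev_gray), block_contrast(curr_gray)
    return (c_prev > min_contrast) & (c_curr < drop_ratio * c_prev)

# Usage with two consecutive grayscale frames loaded e.g. via OpenCV:
#   mask = smoke_candidates(prev_frame, curr_frame)
#   print(mask.sum(), "blocks with a suspicious contrast drop")
```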
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contrast%20analysis" title="contrast analysis">contrast analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=early%20fire%20detection" title=" early fire detection"> early fire detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20smoke%20detection" title=" video smoke detection"> video smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/52006/video-based-ambient-smoke-detection-by-detecting-directional-contrast-decrease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52006.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3146</span> Gesture-Controlled Interface Using Computer Vision and Python</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vedant%20Vardhan%20Rathour">Vedant Vardhan Rathour</a>, <a href="https://publications.waset.org/abstracts/search?q=Anant%20Agrawal"> Anant Agrawal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computer using hand gestures and voice commands. The system leverages advanced computer vision techniques using the MediaPipe framework and OpenCV to detect and interpret real time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the Speech Recognition library allows for seamless execution of tasks like web searches, location navigation and gesture control on the system through voice commands. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/193844/gesture-controlled-interface-using-computer-vision-and-python" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193844.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">12</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3145</span> Inspection of Railway Track Fastening Elements Using Artificial Vision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdelkrim%20Belhaoua">Abdelkrim Belhaoua</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean-Pierre%20Radoux"> Jean-Pierre Radoux</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In France, the railway network is one of the main transport infrastructures and is the second largest European network. Therefore, railway inspection is an important task in railway maintenance to ensure safety for passengers using significant means in personal and technical facilities. Artificial vision has recently been applied to several railway applications due to its potential to improve the efficiency and accuracy when analyzing large databases of acquired images. In this paper, we present a vision system able to detect fastening elements based on artificial vision approach. This system acquires railway images using a CCD camera installed under a control carriage. These images are stitched together before having processed. Experimental results are presented to show that the proposed method is robust for detection fasteners in a complex environment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=railway%20inspection" title=" railway inspection"> railway inspection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20stitching" title=" image stitching"> image stitching</a>, <a href="https://publications.waset.org/abstracts/search?q=fastener%20recognition" title=" fastener recognition"> fastener recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a> </p> <a href="https://publications.waset.org/abstracts/38749/inspection-of-railway-track-fastening-elements-using-artificial-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38749.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">453</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3144</span> Shoulder Range of Motion Measurements using Computer Vision Compared to Hand-Held Goniometric Measurements</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lakshmi%20Sujeesh">Lakshmi Sujeesh</a>, <a href="https://publications.waset.org/abstracts/search?q=Aaron%20Ramzeen"> Aaron Ramzeen</a>, <a href="https://publications.waset.org/abstracts/search?q=Ricky%20Ziming%20Guo"> Ricky Ziming Guo</a>, <a href="https://publications.waset.org/abstracts/search?q=Abhishek%20Agrawal"> Abhishek Agrawal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: Range of motion (ROM) is often measured by physiotherapists using hand-held goniometer as part of mobility assessment for diagnosis. Due to the nature of hand-held goniometer measurement procedure, readings often tend to have some variations depending on the physical therapist taking the measurements (Riddle et al.). This study aims to validate computer vision software readings against goniometric measurements for quick and consistent ROM measurements to be taken by clinicians. The use of this computer vision software hopes to improve the future of musculoskeletal space with more efficient diagnosis from recording of patient’s ROM with minimal human error across different physical therapists. Methods: Using the hand-held long arm goniometer measurements as the “gold-standard”, healthy study participants (n = 20) were made to perform 4 exercises: Front elevation, Abduction, Internal Rotation, and External Rotation, using both arms. Assessment of active ROM using computer vision software at different angles set by goniometer for each exercise was done. Interclass Correlation Coefficient (ICC) using 2-way random effects model, Box-Whisker plots, and Root Mean Square error (RMSE) were used to find the degree of correlation and absolute error measured between set and recorded angles across the repeated trials by the same rater. Results: ICC (2,1) values for all 4 exercises are above 0.9, indicating excellent reliability. 
Keywords: physiotherapy, frozen shoulder, joint range of motion, computer vision
Procedia: https://publications.waset.org/abstracts/164545/shoulder-range-of-motion-measurements-using-computer-vision-compared-to-hand-held-goniometric-measurements | PDF: https://publications.waset.org/abstracts/164545.pdf | Downloads: 107

3143. Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method
Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson
Abstract: Today, many applications use computer vision models, such as face recognition, image classification, and object detection. The accuracy of these models is very important for the performance of these applications. One challenge facing computer vision models is the adversarial examples attack. In computer vision, an adversarial example is an image that is intentionally designed to cause a machine learning model to misclassify it. One well-known method used to attack Convolutional Neural Networks (CNNs) is the Fast Gradient Sign Method (FGSM), whose goal is to find a perturbation that can fool the CNN using the gradient of its cost function. In this paper, we introduce a novel model that uses FGSM to attack a Region-based Convolutional Neural Network (R-CNN). We first extract the regions detected by the R-CNN and resize them to the size of regular images. We then find the best perturbation of the regions that can fool the CNN using FGSM. Next, we add the resulting perturbation to the attacked region to obtain a new region image that looks similar to the original image to human eyes. Finally, we place the regions back into the original image and test the R-CNN with the attacked images. Our model was able to reduce the accuracy of the R-CNN when tested on the Pascal VOC 2012 dataset.
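The single-step FGSM perturbation at the heart of this attack can be sketched in a few lines of PyTorch. The epsilon value, the untrained ResNet-18 stand-in classifier, and the random "region" tensor in the usage comment are assumptions; the paper's region extraction and re-insertion into the full image are not shown.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=8 / 255):
    """Fast Gradient Sign Method: one signed-gradient step on the input.

    image: (1, 3, H, W) tensor in [0, 1]; label: (1,) class-index tensor.
    Returns the perturbed (adversarial) image.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    model.zero_grad()
    loss.backward()
    adv = image + eps * image.grad.sign()     # move along the sign of the gradient
    return adv.clamp(0.0, 1.0).detach()

# Usage sketch with any image classifier, here an untrained torchvision model:
#   from torchvision import models
#   net = models.resnet18(weights=None).eval()
#   region = torch.rand(1, 3, 224, 224)          # a detected region resized to 224x224
#   adv_region = fgsm_perturb(net, region, torch.tensor([0]))
```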
Keywords: adversarial examples, attack, computer vision, image processing
Procedia: https://publications.waset.org/abstracts/103308/non-targeted-adversarial-object-detection-attack-fast-gradient-sign-method | PDF: https://publications.waset.org/abstracts/103308.pdf | Downloads: 193

3142. Image Processing Techniques for Surveillance in Outdoor Environment
Authors: Jayanth C., Anirudh Sai Yetikuri, Kavitha S. N.
Abstract: This paper explores the development and application of computer vision and machine learning techniques for real-time pose detection, facial recognition, and number plate extraction. Utilizing MediaPipe for pose estimation, the research presents methods for detecting hand raises and ducking postures through real-time video analysis. Complementarily, facial recognition is employed to compare and verify individual identities using the face recognition library. Additionally, the paper demonstrates a robust approach for extracting and storing vehicle number plates from images, integrating Optical Character Recognition (OCR) with a database management system. The study highlights the effectiveness and versatility of these technologies in practical scenarios, including security and surveillance applications. The findings underscore the potential of combining computer vision techniques to address diverse challenges and enhance automated systems for both individual and vehicular identification. This research contributes to the fields of computer vision and machine learning by providing scalable solutions and demonstrating their applicability in real-world contexts.
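A hedged sketch of the OCR stage of number-plate extraction is shown below, using OpenCV preprocessing and Tesseract via pytesseract (which requires the Tesseract binary to be installed). The preprocessing steps, page-segmentation mode, character whitelist, and file name are assumptions; the plate detector and the database write are not shown.

```python
import cv2
import pytesseract  # wrapper around the Tesseract OCR engine

def read_plate_text(plate_bgr):
    """OCR a cropped number-plate image and return the recognised string."""
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(binary, config=config).strip()

# Usage sketch with a hypothetical detector output:
#   crop = cv2.imread("plate_crop.jpg")
#   text = read_plate_text(crop)
#   print(text)        # the string would then be inserted into the database table
```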
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20detection" title=" pose detection"> pose detection</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=number%20plate%20extraction" title=" number plate extraction"> number plate extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20analysis" title=" real-time analysis"> real-time analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=OCR" title=" OCR"> OCR</a>, <a href="https://publications.waset.org/abstracts/search?q=database%20management" title=" database management"> database management</a> </p> <a href="https://publications.waset.org/abstracts/191153/image-processing-techniques-for-surveillance-in-outdoor-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/191153.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">26</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3141</span> Visual Improvement with Low Vision Aids in Children with Stargardt’s Disease</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anum%20Akhter">Anum Akhter</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumaira%20Altaf"> Sumaira Altaf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: To study the effect of low vision devices i.e. telescope and magnifying glasses on distance visual acuity and near visual acuity of children with Stargardt’s disease. Setting: Low vision department, Alshifa Trust Eye Hospital, Rawalpindi, Pakistan. Methods: 52 children having Stargardt’s disease were included in the study. All children were diagnosed by pediatrics ophthalmologists. Comprehensive low vision assessment was done by me in Low vision clinic. Visual acuity was measured using ETDRS chart. Refraction and other supplementary tests were performed. Children with Stargardt’s disease were provided with different telescopes and magnifying glasses for improving far vision and near vision. Results: Out of 52 children, 17 children were males and 35 children were females. Distance visual acuity and near visual acuity improved significantly with low vision aid trial. All children showed visual acuity better than 6/19 with a telescope of higher magnification. Improvement in near visual acuity was also significant with magnifying glasses trial. Conclusions: Low vision aids are useful for improvement in visual acuity in children. Children with Stargardt’s disease who are having a problem in education and daily life activities can get help from low vision aids. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stargardt" title="Stargardt">Stargardt</a>, <a href="https://publications.waset.org/abstracts/search?q=s%20disease" title="s disease">s disease</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20vision%20aids" title=" low vision aids"> low vision aids</a>, <a href="https://publications.waset.org/abstracts/search?q=telescope" title=" telescope"> telescope</a>, <a href="https://publications.waset.org/abstracts/search?q=magnifiers" title=" magnifiers"> magnifiers</a> </p> <a href="https://publications.waset.org/abstracts/24382/visual-improvement-with-low-vision-aids-in-children-with-stargardts-disease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">538</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3140</span> Design of a Computer Vision Based Exercise Video Game for Senior Citizens</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=June%20Tay">June Tay</a>, <a href="https://publications.waset.org/abstracts/search?q=Ivy%20Chia"> Ivy Chia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There are numerous changes, both mental and physical, taking place when people age. We need to understand the different aspects required for healthy living, including meeting nutritional needs, regular physical activities to keep agility, sufficient rest and sleep to have physical and mental well-being, social engagement to avoid the risk of social isolation and depression, and access to healthcare to detect and manage chronic conditions. Promoting physical activities for an ageing population is necessary as many may have enjoyed sedentary lifestyles for some time. In our study, we evaluate the considerations when designing a computer vision video game for the elderly. We need to design some low-impact activities, such as stretching and gentle movements, because some elderly individuals may have joint pains or mobility issues. The exercise game should consist of simple movements that are easy to follow and remember. It should be fun and enjoyable so that they can be motivated to do some exercise. Social engagement can keep the elderly motivated and competitive, and they are more willing to engage in game exercises. Elderly citizens can compare their game scores and try to improve them. We propose a computer vision-based video game for the elderly that will capture and track the movement of the elderly hand pushing a ball on the screen into a circle. It can be easily set up using a PC laptop with a webcam. Our video game adhered to the design framework we employed, and it encompassed ease of use, a simple graphical interface, easy-to-play game exercise, and fun gameplay. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=about%20computer%20vision" title="about computer vision">about computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20games" title=" video games"> video games</a>, <a href="https://publications.waset.org/abstracts/search?q=gerontology%20technology" title=" gerontology technology"> gerontology technology</a>, <a href="https://publications.waset.org/abstracts/search?q=caregiving" title=" caregiving"> caregiving</a> </p> <a href="https://publications.waset.org/abstracts/166604/design-of-a-computer-vision-based-exercise-video-game-for-senior-citizens" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166604.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3139</span> Comparison of Classical Computer Vision vs. Convolutional Neural Networks Approaches for Weed Mapping in Aerial Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Paulo%20Cesar%20Pereira%20Junior">Paulo Cesar Pereira Junior</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandre%20Monteiro"> Alexandre Monteiro</a>, <a href="https://publications.waset.org/abstracts/search?q=Rafael%20da%20Luz%20Ribeiro"> Rafael da Luz Ribeiro</a>, <a href="https://publications.waset.org/abstracts/search?q=Antonio%20Carlos%20Sobieranski"> Antonio Carlos Sobieranski</a>, <a href="https://publications.waset.org/abstracts/search?q=Aldo%20von%20Wangenheim"> Aldo von Wangenheim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a comparison between convolutional neural networks and classical computer vision approaches, for the specific precision agriculture problem of weed mapping on sugarcane fields aerial images. A systematic literature review was conducted to find which computer vision methods are being used on this specific problem. The most cited methods were implemented, as well as four models of convolutional neural networks. All implemented approaches were tested using the same dataset, and their results were quantitatively and qualitatively analyzed. The obtained results were compared to a human expert made ground truth for validation. The results indicate that the convolutional neural networks present better precision and generalize better than the classical models. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20image%20processing" title=" digital image processing"> digital image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicles" title=" unmanned aerial vehicles"> unmanned aerial vehicles</a> </p> <a href="https://publications.waset.org/abstracts/112982/comparison-of-classical-computer-vision-vs-convolutional-neural-networks-approaches-for-weed-mapping-in-aerial-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112982.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">260</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3138</span> The Effects of Prolonged Social Media Use on Student Health: A Focus on Computer Vision Syndrome, Hand Pain, and Headaches and Mental Status</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Augustine%20Ndudi%20Egere">Augustine Ndudi Egere</a>, <a href="https://publications.waset.org/abstracts/search?q=Shehu%20Adamu"> Shehu Adamu</a>, <a href="https://publications.waset.org/abstracts/search?q=Esther%20Ishaya%20Solomon"> Esther Ishaya Solomon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As internet accessibility and smartphones continue to increase in Nigeria, Africa’s most populous country, social media platforms have become ubiquitous, causing students of 18-25 age brackets to spend more time on social media. The research investigated the impact of prolonged social media use on the physical health of students, with a specific focus on computer vision syndrome, hand pain, headaches and mental status. The study adopted a mixed-methods approach combining quantitative surveys to gather statistical data on usage patterns and symptoms, along with qualitative interviews into the experiences and perceptions of medical practitioners concerning cases under study within the geopolitical region. The result was analyzed using Regression analysis. It was observed that there is a significant correlation between social media usage by the students in the study age bracket concerning computer vision syndrome, hand pain, headache and general mental status. The research concluded by providing valuable insights into potential interventions and strategies to mitigate the adverse effects of excessive social media use on student well-being and recommends, among others, that educational institutions, parents, and students themselves collaborate to implement strategies aimed at promoting responsible and balanced use of social media. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=social%20media" title="social media">social media</a>, <a href="https://publications.waset.org/abstracts/search?q=student%20health" title=" student health"> student health</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision%20syndrome" title=" computer vision syndrome"> computer vision syndrome</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20pain" title=" hand pain"> hand pain</a>, <a href="https://publications.waset.org/abstracts/search?q=headaches" title=" headaches"> headaches</a>, <a href="https://publications.waset.org/abstracts/search?q=mental%20staus" title=" mental staus"> mental staus</a> </p> <a href="https://publications.waset.org/abstracts/185876/the-effects-of-prolonged-social-media-use-on-student-health-a-focus-on-computer-vision-syndrome-hand-pain-and-headaches-and-mental-status" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185876.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">45</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3137</span> Comparative Analysis of Feature Extraction and Classification Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20L.%20Ujjwal">R. L. Ujjwal</a>, <a href="https://publications.waset.org/abstracts/search?q=Abhishek%20Jain"> Abhishek Jain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of computer vision, most facial variations such as identity, expression, emotions and gender have been extensively studied. Automatic age estimation has been rarely explored. With age progression of a human, the features of the face changes. This paper is providing a new comparable study of different type of algorithm to feature extraction [Hybrid features using HAAR cascade & HOG features] & classification [KNN & SVM] training dataset. By using these algorithms we are trying to find out one of the best classification algorithms. Same thing we have done on the feature selection part, we extract the feature by using HAAR cascade and HOG. This work will be done in context of age group classification model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=age%20group" title=" age group"> age group</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a> </p> <a href="https://publications.waset.org/abstracts/58670/comparative-analysis-of-feature-extraction-and-classification-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58670.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3136</span> Optimizing Machine Vision System Setup Accuracy by Six-Sigma DMAIC Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joseph%20C.%20Chen">Joseph C. Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machine vision system provides automatic inspection to reduce manufacturing costs considerably. However, only a few principles have been found to optimize machine vision system and help it function more accurately in industrial practice. Mostly, there were complicated and impractical design techniques to improve the accuracy of machine vision system. This paper discusses implementing the Six Sigma Define, Measure, Analyze, Improve, and Control (DMAIC) approach to optimize the setup parameters of machine vision system when it is used as a direct measurement technique. This research follows a case study showing how Six Sigma DMAIC methodology has been put into use. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DMAIC" title="DMAIC">DMAIC</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision%20system" title=" machine vision system"> machine vision system</a>, <a href="https://publications.waset.org/abstracts/search?q=process%20capability" title=" process capability"> process capability</a>, <a href="https://publications.waset.org/abstracts/search?q=Taguchi%20Parameter%20Design" title=" Taguchi Parameter Design"> Taguchi Parameter Design</a> </p> <a href="https://publications.waset.org/abstracts/68243/optimizing-machine-vision-system-setup-accuracy-by-six-sigma-dmaic-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68243.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">436</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3135</span> Analysis of Histogram Asymmetry for Waste Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janusz%20Bobulski">Janusz Bobulski</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamila%20Pasternak"> Kamila Pasternak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Despite many years of effort and research, the problem of waste management is still current. So far, no fully effective waste management system has been developed. Many programs and projects improve statistics on the percentage of waste recycled every year. In these efforts, it is worth using modern Computer Vision techniques supported by artificial intelligence. In the article, we present a method of identifying plastic waste based on the asymmetry analysis of the histogram of the image containing the waste. The method is simple but effective (94%), which allows it to be implemented on devices with low computing power, in particular on microcomputers. Such de-vices will be used both at home and in waste sorting plants. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=waste%20management" title="waste management">waste management</a>, <a href="https://publications.waset.org/abstracts/search?q=environmental%20protection" title=" environmental protection"> environmental protection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/155242/analysis-of-histogram-asymmetry-for-waste-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155242.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">119</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3134</span> Objects Tracking in Catadioptric Images Using Spherical Snake</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khald%20Anisse">Khald Anisse</a>, <a href="https://publications.waset.org/abstracts/search?q=Amina%20Radgui"> Amina Radgui</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Rziza"> Mohammed Rziza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tracking objects on video sequences is a very challenging task in many works in computer vision applications. However, there is no article that treats this topic in catadioptric vision. This paper is an attempt that tries to describe a new approach of omnidirectional images processing based on inverse stereographic projection in the half-sphere. We used the spherical model proposed by Gayer and al. For object tracking, our work is based on snake method, with optimization using the Greedy algorithm, by adapting its different operators. The algorithm will respect the deformed geometries of omnidirectional images such as spherical neighborhood, spherical gradient and reformulation of optimization algorithm on the spherical domain. This tracking method that we call "spherical snake" permitted to know the change of the shape and the size of object in different replacements in the spherical image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=spherical%20snake" title=" spherical snake"> spherical snake</a>, <a href="https://publications.waset.org/abstracts/search?q=omnidirectional%20image" title=" omnidirectional image"> omnidirectional image</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20stereographic%20projection" title=" inverse stereographic projection"> inverse stereographic projection</a> </p> <a href="https://publications.waset.org/abstracts/2285/objects-tracking-in-catadioptric-images-using-spherical-snake" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2285.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">402</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3133</span> Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eugene%20Y.%20J.%20Aw">Eugene Y. J. Aw</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20W.%20Koh"> J. W. Koh</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20H.%20Chew"> S. H. Chew</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20E.%20Chua"> K. E. Chua</a>, <a href="https://publications.waset.org/abstracts/search?q=Lionel%20L.%20J.%20Ang"> Lionel L. J. Ang</a>, <a href="https://publications.waset.org/abstracts/search?q=Algernon%20C.%20S.%20Hong"> Algernon C. S. Hong</a>, <a href="https://publications.waset.org/abstracts/search?q=Danette%20S.%20E.%20Tan"> Danette S. E. Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=Grace%20H.%20B.%20Foo"> Grace H. B. Foo</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Q.%20Hong"> K. Q. Hong</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20M.%20Cheng"> L. M. Cheng</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20L.%20Leong"> M. L. Leong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a novel rapid soil classification technique that combines computer vision with four-probe soil electrical resistivity method and cone penetration test (CPT), to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual survey, such that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive. Thus, a rapid classification method is needed at the SGs. 
Computer vision, four-probe soil electrical resistivity and CPT were combined into an innovative non-destructive and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using industrial grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of soil (ρ) is measured using a set of four probes arranged in Wenner’s array. It was found from the previous study that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils – “Good Earth”, “Soft Clay” and an even mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay”. It is also found that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Computer%20vision%20technique" title="Computer vision technique">Computer vision technique</a>, <a href="https://publications.waset.org/abstracts/search?q=cone%20penetration%20test" title=" cone penetration test"> cone penetration test</a>, <a href="https://publications.waset.org/abstracts/search?q=electrical%20resistivity" title=" electrical resistivity"> electrical resistivity</a>, <a href="https://publications.waset.org/abstracts/search?q=rapid%20and%20non-destructive" title=" rapid and non-destructive"> rapid and non-destructive</a>, <a href="https://publications.waset.org/abstracts/search?q=soil%20classification" title=" soil classification"> soil classification</a> </p> <a href="https://publications.waset.org/abstracts/132538/rapid-soil-classification-using-computer-vision-electrical-resistivity-and-soil-strength" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132538.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">218</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3132</span> A Combined Approach Based on Artificial Intelligence and Computer Vision for Qualitative Grading of Rice Grains</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hemad%20Zareiforoush">Hemad Zareiforoush</a>, <a href="https://publications.waset.org/abstracts/search?q=Saeed%20Minaei"> Saeed Minaei</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Banakar"> Ahmad Banakar</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Reza%20Alizadeh"> Mohammad Reza Alizadeh</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> The quality inspection of rice (Oryza sativa L.) during its various processing stages is very important. In this research, an artificial intelligence-based model coupled with computer vision techniques was developed as a decision support system for qualitative grading of rice grains. For conducting the experiments, first, 25 samples of rice grains with different levels of percentage of broken kernels (PBK) and degree of milling (DOM) were prepared and their qualitative grade was assessed by experienced experts. Then, the quality parameters of the same samples examined by experts were determined using a machine vision system. A grading model was developed based on fuzzy logic theory in MATLAB software for making a relationship between the qualitative characteristics of the product and its quality. Totally, 25 rules were used for qualitative grading based on AND operator and Mamdani inference system. The fuzzy inference system was consisted of two input linguistic variables namely, DOM and PBK, which were obtained by the machine vision system, and one output variable (quality of the product). The model output was finally defuzzified using Center of Maximum (COM) method. In order to evaluate the developed model, the output of the fuzzy system was compared with experts’ assessments. It was revealed that the developed model can estimate the qualitative grade of the product with an accuracy of 95.74%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title="machine vision">machine vision</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20logic" title=" fuzzy logic"> fuzzy logic</a>, <a href="https://publications.waset.org/abstracts/search?q=rice" title=" rice"> rice</a>, <a href="https://publications.waset.org/abstracts/search?q=quality" title=" quality"> quality</a> </p> <a href="https://publications.waset.org/abstracts/9943/a-combined-approach-based-on-artificial-intelligence-and-computer-vision-for-qualitative-grading-of-rice-grains" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">419</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3131</span> Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eugene%20Y.%20J.%20Aw">Eugene Y. J. Aw</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20W.%20Koh"> J. W. Koh</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20H.%20Chew"> S. H. Chew</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20E.%20Chua"> K. E. Chua</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20L.%20Goh"> P. L. Goh</a>, <a href="https://publications.waset.org/abstracts/search?q=Grace%20H.%20B.%20Foo"> Grace H. B. Foo</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20L.%20Leong"> M. L. 
Leong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the evaluation of various soil testing methods such as the four-probe soil electrical resistivity method and cone penetration test (CPT) that can complement a newly developed novel rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual survey, such that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive. Thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as suitable additions to the computer vision system to further develop this innovative non-destructive and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). It was found from the previous study that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the following three items were targeted to be added onto the computer vision scheme: the apparent electrical resistivity of soil (ρ) measured using a set of four probes arranged in Wenner’s array, the soil strength measured using a modified mini cone penetrometer, and w measured using a set of time-domain reflectometry (TDR) probes. Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils – “Good Earth”, “Soft Clay,” and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementing methods to the computer vision system. 
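<p class="card-text">A hedged sketch of the image-analysis side of such a scheme, GLCM textural parameters fed to a small neural-network classifier with the apparent resistivity optionally appended as an extra feature (for a Wenner array, ρ is commonly taken as 2πa·ΔV/I for probe spacing a), is given below using scikit-image and scikit-learn; the GLCM offsets, properties and network size are illustrative assumptions, not the study's actual configuration.</p>
<pre><code class="language-python">
# Illustrative sketch: GLCM textural features + a small neural-network classifier
# for "Good Earth" vs "Soft Clay" soil images. Offsets, properties and network
# size are assumptions for demonstration only.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

GLCM_PROPS = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")

def glcm_features(gray_u8):
    """Average GLCM properties over four directions at a 1-pixel offset."""
    glcm = graycomatrix(gray_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean() for p in GLCM_PROPS])

def train_soil_classifier(images, labels, resistivity=None):
    """images: uint8 grayscale patches; labels: 0 = Good Earth, 1 = Soft Clay.
    resistivity (optional): per-sample apparent resistivity appended as a feature."""
    X = np.array([glcm_features(img) for img in images])
    if resistivity is not None:
        X = np.hstack([X, np.asarray(resistivity, dtype=float).reshape(-1, 1)])
    ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    return ann.fit(X, np.asarray(labels))
</code></pre>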
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision%20technique" title="computer vision technique">computer vision technique</a>, <a href="https://publications.waset.org/abstracts/search?q=cone%20penetration%20test" title=" cone penetration test"> cone penetration test</a>, <a href="https://publications.waset.org/abstracts/search?q=electrical%20resistivity" title=" electrical resistivity"> electrical resistivity</a>, <a href="https://publications.waset.org/abstracts/search?q=rapid%20and%20non-destructive" title=" rapid and non-destructive"> rapid and non-destructive</a>, <a href="https://publications.waset.org/abstracts/search?q=soil%20classification" title=" soil classification"> soil classification</a> </p> <a href="https://publications.waset.org/abstracts/144895/rapid-soil-classification-using-computer-vision-with-electrical-resistivity-and-soil-strength" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144895.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">239</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3130</span> Cone Contrast Sensitivity of Normal Trichromats and Those with Red-Green Dichromats</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tatsuya%20Iizuka">Tatsuya Iizuka</a>, <a href="https://publications.waset.org/abstracts/search?q=Takushi%20Kawamorita"> Takushi Kawamorita</a>, <a href="https://publications.waset.org/abstracts/search?q=Tomoya%20Handa"> Tomoya Handa</a>, <a href="https://publications.waset.org/abstracts/search?q=Hitoshi%20Ishikawa"> Hitoshi Ishikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We report normative cone contrast sensitivity values and sensitivity and specificity values for a computer-based color vision test, the cone contrast test-HD (CCT-HD). The participants included 50 phakic eyes with normal color vision (NCV) and 20 dichromatic eyes (ten with protanopia and ten with deuteranopia). The CCT-HD was used to measure L, M, and S-CCT-HD scores (color vision deficiency, L-, M-cone logCS≦1.65, S-cone logCS≦0.425) to investigate the sensitivity and specificity of CCT-HD based on anomalous-type diagnosis with animalscope. The mean ± standard error L-, M-, S-cone logCS for protanopia were 0.90±0.04, 1.65±0.03, and 0.63±0.02, respectively; for deuteranopia 1.74±0.03, 1.31±0.03, and 0.61±0.06, respectively; and for age-matched NCV were 1.89±0.04, 1.84±0.04, and 0.60±0.03, respectively, with significant differences for each group except for S-CCT-HD (Bonferroni corrected α = 0.0167, p < 0.0167). The sensitivity and specificity of CCT-HD were 100% for protan and deutan in diagnosing abnormal types from 20 to 64 years of age, but the specificity decreased to 65% for protan and 55% for deutan in older persons > 65. CCT-HD is comparable to the diagnostic performance of the anomalous type in the anomaloscope for the 20-64-year-old age group. However, the results should be interpreted cautiously in those ≥ 65 years. They are more susceptible to acquired color vision deficiencies due to the yellowing of the crystalline lens and other factors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cone%20contrast%20test%20HD" title="cone contrast test HD">cone contrast test HD</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20vision%20test" title=" color vision test"> color vision test</a>, <a href="https://publications.waset.org/abstracts/search?q=congenital%20color%20vision%20deficiency" title=" congenital color vision deficiency"> congenital color vision deficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=red-green%20dichromacy" title=" red-green dichromacy"> red-green dichromacy</a>, <a href="https://publications.waset.org/abstracts/search?q=cone%20contrast%20sensitivity" title=" cone contrast sensitivity"> cone contrast sensitivity</a> </p> <a href="https://publications.waset.org/abstracts/159154/cone-contrast-sensitivity-of-normal-trichromats-and-those-with-red-green-dichromats" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159154.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3129</span> FLIME - Fast Low Light Image Enhancement for Real-Time Video</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vinay%20P.">Vinay P.</a>, <a href="https://publications.waset.org/abstracts/search?q=Srinivas%20K.%20S."> Srinivas K. S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Low Light Image Enhancement is of utmost impor- tance in computer vision based tasks. Applications include vision systems for autonomous driving, night vision devices for defence systems, low light object detection tasks. Many of the existing deep learning methods are resource intensive during the inference step and take considerable time for processing. The algorithm should take considerably less than 41 milliseconds in order to process a real-time video feed with 24 frames per second and should be even less for a video with 30 or 60 frames per second. The paper presents a fast and efficient solution which has two main advantages, it has the potential to be used for a real-time video feed, and it can be used in low compute environments because of the lightweight nature. The proposed solution is a pipeline of three steps, the first one is the use of a simple function to map input RGB values to output RGB values, the second is to balance the colors and the final step is to adjust the contrast of the image. Hence a custom dataset is carefully prepared using images taken in low and bright lighting conditions. The preparation of the dataset, the proposed model, the processing time are discussed in detail and the quality of the enhanced images using different methods is shown. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=low%20light%20image%20enhancement" title="low light image enhancement">low light image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20video" title=" real-time video"> real-time video</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/144526/flime-fast-low-light-image-enhancement-for-real-time-video" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144526.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">204</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3128</span> Convolutional Neural Network and LSTM Applied to Abnormal Behaviour Detection from Highway Footage</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rafael%20Marinho%20de%20Andrade">Rafael Marinho de Andrade</a>, <a href="https://publications.waset.org/abstracts/search?q=Elcio%20Hideti%20Shiguemori"> Elcio Hideti Shiguemori</a>, <a href="https://publications.waset.org/abstracts/search?q=Rafael%20Duarte%20Coelho%20dos%20Santos"> Rafael Duarte Coelho dos Santos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Relying on computer vision, many clever things are possible in order to make the world safer and optimized on resource management, especially considering time and attention as manageable resources, once the modern world is very abundant in cameras from inside our pockets to above our heads while crossing the streets. Thus, automated solutions based on computer vision techniques to detect, react, or even prevent relevant events such as robbery, car crashes and traffic jams can be accomplished and implemented for the sake of both logistical and surveillance improvements. In this paper, we present an approach for vehicles’ abnormal behaviors detection from highway footages, in which the vectorial data of the vehicles’ displacement are extracted directly from surveillance cameras footage through object detection and tracking with a deep convolutional neural network and inserted into a long-short term memory neural network for behavior classification. The results show that the classifications of behaviors are consistent and the same principles may be applied to other trackable objects and scenarios as well. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=behavior%20detection" title=" behavior detection"> behavior detection</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=highway%20footage" title=" highway footage"> highway footage</a> </p> <a href="https://publications.waset.org/abstracts/144246/convolutional-neural-network-and-lstm-applied-to-abnormal-behaviour-detection-from-highway-footage" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144246.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3127</span> UAV Based Visual Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaibhav%20Dalmia">Vaibhav Dalmia</a>, <a href="https://publications.waset.org/abstracts/search?q=Manoj%20Phirke"> Manoj Phirke</a>, <a href="https://publications.waset.org/abstracts/search?q=Renith%20G"> Renith G</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the wide adoption of UAVs (unmanned aerial vehicles) in various industries by the government as well as private corporations for solving computer vision tasks it’s necessary that their potential is analyzed completely. Recent advances in Deep Learning have also left us with a plethora of algorithms to solve different computer vision tasks. This study provides a comprehensive survey on solving the Visual Object Tracking problem and explains the tradeoffs involved in building a real-time yet reasonably accurate object tracking system for UAVs by looking at existing methods and evaluating them on the aerial datasets. Finally, the best trackers suitable for UAV-based applications are provided. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=drones" title=" drones"> drones</a>, <a href="https://publications.waset.org/abstracts/search?q=single%20object%20tracking" title=" single object tracking"> single object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20object%20tracking" title=" visual object tracking"> visual object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=UAVs" title=" UAVs"> UAVs</a> </p> <a href="https://publications.waset.org/abstracts/145331/uav-based-visual-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=105">105</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=106">106</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20vision&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a 
href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>