Search results for: Opencv
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Opencv"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 25</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Opencv</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> Improvements in OpenCV's Viola Jones Algorithm in Face Detection–Skin Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jyoti%20Bharti">Jyoti Bharti</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20K.%20Gupta"> M. K. Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Astha%20Jain"> Astha Jain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a new improved approach for false positives filtering of detected face images on OpenCV’s Viola Jones Algorithm In this approach, for Filtering of False Positives, Skin Detection in two colour spaces i.e. HSV (Hue, Saturation and Value) and YCrCb (Y is luma component and Cr- red difference, Cb- Blue difference) is used. As a result, it is found that false detection has been reduced. Our proposed method reaches the accuracy of about 98.7%. Thus, a better recognition rate is achieved. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title="face detection">face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Viola%20Jones" title=" Viola Jones"> Viola Jones</a>, <a href="https://publications.waset.org/abstracts/search?q=false%20positives" title=" false positives"> false positives</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a> </p> <a href="https://publications.waset.org/abstracts/48849/improvements-in-opencvs-viola-jones-algorithm-in-face-detection-skin-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48849.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">407</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">24</span> A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20K.%20Adedeji">J. K. Adedeji</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20O.%20Oyekanmi"> M. O. Oyekanmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper has critically examined the use of Machine Learning procedures in curbing unauthorized access into valuable areas of an organization. The use of passwords, pin codes, user’s identification in recent times has been partially successful in curbing crimes involving identities, hence the need for the design of a system which incorporates biometric characteristics such as DNA and pattern recognition of variations in facial expressions. The facial model used is the OpenCV library which is based on the use of certain physiological features, the Raspberry Pi 3 module is used to compile the OpenCV library, which extracts and stores the detected faces into the datasets directory through the use of camera. The model is trained with 50 epoch run in the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in the OpenCV. The training algorithm used by the neural network is back propagation coded using python algorithmic language with 200 epoch runs to identify specific resemblance in the exclusive OR (XOR) output neurons. The research however confirmed that physiological parameters are better effective measures to curb crimes relating to identities. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric%20characters" title="biometric characters">biometric characters</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a> </p> <a href="https://publications.waset.org/abstracts/93018/a-neuron-model-of-facial-recognition-and-detection-of-an-authorized-entity-using-machine-learning-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93018.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> Advancing in Cricket Analytics: Novel Approaches for Pitch and Ball Detection Employing OpenCV and YOLOV8</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pratham%20Madnur">Pratham Madnur</a>, <a href="https://publications.waset.org/abstracts/search?q=Prathamkumar%20Shetty"> Prathamkumar Shetty</a>, <a href="https://publications.waset.org/abstracts/search?q=Sneha%20Varur"> Sneha Varur</a>, <a href="https://publications.waset.org/abstracts/search?q=Gouri%20Parashetti"> Gouri Parashetti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to overcome conventional obstacles, this research paper investigates novel approaches for cricket pitch and ball detection that make use of cutting-edge technologies. The research integrates OpenCV for pitch inspection and modifies the YOLOv8 model for cricket ball detection in order to overcome the shortcomings of manual pitch assessment and traditional ball detection techniques. To ensure flexibility in a range of pitch environments, the pitch detection method leverages OpenCV’s color space transformation, contour extraction, and accurate color range defining features. Regarding ball detection, the YOLOv8 model emphasizes the preservation of minor object details to improve accuracy and is specifically trained to the unique properties of cricket balls. The methods are more reliable because of the careful preparation of the datasets, which include novel ball and pitch information. These cutting-edge methods not only improve cricket analytics but also set the stage for flexible methods in more general sports technology applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title="OpenCV">OpenCV</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv8" title=" YOLOv8"> YOLOv8</a>, <a href="https://publications.waset.org/abstracts/search?q=cricket" title=" cricket"> cricket</a>, <a href="https://publications.waset.org/abstracts/search?q=custom%20dataset" title=" custom dataset"> custom dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=sports" title=" sports"> sports</a> </p> <a href="https://publications.waset.org/abstracts/182020/advancing-in-cricket-analytics-novel-approaches-for-pitch-and-ball-detection-employing-opencv-and-yolov8" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182020.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> Image Processing and Calculation of NGRDI Embedded System in Raspberry</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Efren%20Lopez%20Jimenez">Efren Lopez Jimenez</a>, <a href="https://publications.waset.org/abstracts/search?q=Maria%20Isabel%20Cajero"> Maria Isabel Cajero</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Irving-Vasqueza"> J. Irving-Vasqueza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use and processing of digital images have opened up new opportunities for the resolution of problems of various kinds, such as the calculation of different vegetation indexes, among other things, differentiating healthy vegetation from humid vegetation. However, obtaining images from which these indexes are calculated is still the exclusive subject of active research. In the present work, we propose to obtain these images using a low cost embedded system (Raspberry Pi) and its processing, using a set of libraries of open code called OpenCV, in order to obtain the Normalized Red-Green Difference Index (NGRDI). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Raspberry%20Pi" title="Raspberry Pi">Raspberry Pi</a>, <a href="https://publications.waset.org/abstracts/search?q=vegetation%20index" title=" vegetation index"> vegetation index</a>, <a href="https://publications.waset.org/abstracts/search?q=Normalized%20Red-Green%20Difference%20Index%20%28NGRDI%29" title=" Normalized Red-Green Difference Index (NGRDI)"> Normalized Red-Green Difference Index (NGRDI)</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a> </p> <a href="https://publications.waset.org/abstracts/72145/image-processing-and-calculation-of-ngrdi-embedded-system-in-raspberry" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72145.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">291</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> Proposal for a Web System for the Control of Fungal Diseases in Grapes in Fruits Markets</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Tarme%C3%B1o%20Noriega">Carlos Tarmeño Noriega</a>, <a href="https://publications.waset.org/abstracts/search?q=Igor%20Aguilar%20Alonso"> Igor Aguilar Alonso</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fungal diseases are common in vineyards; they cause a decrease in the quality of the products that can be sold, generating distrust of the customer towards the seller when buying fruit. Currently, technology allows the classification of fruits according to their characteristics thanks to artificial intelligence. This study proposes the implementation of a control system that allows the identification of the main fungal diseases present in the Italia grape, making use of a convolutional neural network (CNN), OpenCV, and TensorFlow. The methodology used was based on a collection of 20 articles referring to the proposed research on quality control, classification, and recognition of fruits through artificial vision techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=quality%20control" title=" quality control"> quality control</a>, <a href="https://publications.waset.org/abstracts/search?q=fruit%20market" title=" fruit market"> fruit market</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a>, <a href="https://publications.waset.org/abstracts/search?q=TensorFlow" title=" TensorFlow"> TensorFlow</a> </p> <a href="https://publications.waset.org/abstracts/160550/proposal-for-a-web-system-for-the-control-of-fungal-diseases-in-grapes-in-fruits-markets" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160550.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">83</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20</span> Underneath Vehicle Inspection Using Fuzzy Logic, Subsumption, and Open Cv-Library</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hazim%20Abdulsada">Hazim Abdulsada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The inspection of underneath vehicle system has been given significant attention by governments after the threat of terrorism become more prevalent. New technologies such as mobile robots and computer vision are led to have more secure environment. This paper proposed that a mobile robot like Aria robot can be used to search and inspect the bombs under parking a lot vehicle. This robot is using fuzzy logic and subsumption algorithms to control the robot that movies underneath the vehicle. An OpenCV library and laser Hokuyo are added to Aria robot to complete the experiment for under vehicle inspection. This experiment was conducted at the indoor environment to demonstrate the efficiency of our methods to search objects and control the robot movements under vehicle. We got excellent results not only by controlling the robot movement but also inspecting object by the robot camera at same time. This success allowed us to know the requirement to construct a new cost effective robot with more functionality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20logic" title="fuzzy logic">fuzzy logic</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20robots" title=" mobile robots"> mobile robots</a>, <a href="https://publications.waset.org/abstracts/search?q=Opencv" title=" Opencv"> Opencv</a>, <a href="https://publications.waset.org/abstracts/search?q=subsumption" title=" subsumption"> subsumption</a>, <a href="https://publications.waset.org/abstracts/search?q=under%20vehicle%20inspection" title=" under vehicle inspection "> under vehicle inspection </a> </p> <a href="https://publications.waset.org/abstracts/20775/underneath-vehicle-inspection-using-fuzzy-logic-subsumption-and-open-cv-library" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20775.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">472</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maria%20Pavlova">Maria Pavlova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In our days, it is possible to put the camera on different vehicles like quadcopter, train, airplane and etc. The camera also can be the input sensor in many different systems. That means the object recognition like non separate part of monitoring control can be key part of the most intelligent systems. The aim of this paper is to focus of the object recognition process during vehicles movement. During the vehicle’s movement the camera takes pictures from the environment without storage in Data Base. In case the camera detects a special object (for example human or animal), the system saves the picture and sends it to the work station in real time. This functionality will be very useful in emergency or security situations where is necessary to find a specific object. In another application, the camera can be mounted on crossroad where do not have many people and if one or more persons come on the road, the traffic lights became the green and they can cross the road. In this papers is presented the system has solved the aforementioned problems. It is presented architecture of the object recognition system includes the camera, Raspberry platform, GPS system, neural network, software and Data Base. The camera in the system takes the pictures. The object recognition is done in real time using the OpenCV library and Raspberry microcontroller. An additional feature of this library is the ability to display the GPS coordinates of the captured objects position. The results from this processes will be sent to remote station. So, in this case, we can know the location of the specific object. By neural network, we can learn the module to solve the problems using incoming data and to be part in bigger intelligent system. The present paper focuses on the design and integration of the image recognition like a part of smart systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera" title="camera">camera</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a>, <a href="https://publications.waset.org/abstracts/search?q=Raspberry" title=" Raspberry"> Raspberry</a> </p> <a href="https://publications.waset.org/abstracts/81695/object-recognition-system-operating-from-different-type-vehicles-using-raspberry-and-opencv" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81695.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">218</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> Training of Future Computer Science Teachers Based on Machine Learning Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Meruert%20Serik">Meruert Serik</a>, <a href="https://publications.waset.org/abstracts/search?q=Nassipzhan%20Duisegaliyeva"> Nassipzhan Duisegaliyeva</a>, <a href="https://publications.waset.org/abstracts/search?q=Danara%20Tleumagambetova"> Danara Tleumagambetova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The article highlights and describes the characteristic features of real-time face detection in images and videos using machine learning algorithms. Students of educational programs reviewed the research work "6B01511-Computer Science", "7M01511-Computer Science", "7M01525- STEM Education," and "8D01511-Computer Science" of Eurasian National University named after L.N. Gumilyov. As a result, the advantages and disadvantages of Haar Cascade (Haar Cascade OpenCV), HoG SVM (Histogram of Oriented Gradients, Support Vector Machine), and MMOD CNN Dlib (Max-Margin Object Detection, convolutional neural network) detectors used for face detection were determined. Dlib is a general-purpose cross-platform software library written in the programming language C++. It includes detectors used for determining face detection. The Cascade OpenCV algorithm is efficient for fast face detection. The considered work forms the basis for the development of machine learning methods by future computer science teachers. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=algorithm" title="algorithm">algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=education" title=" education"> education</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/170539/training-of-future-computer-science-teachers-based-on-machine-learning-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170539.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">73</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> Alphabet Recognition Using Pixel Probability Distribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaidehi%20Murarka">Vaidehi Murarka</a>, <a href="https://publications.waset.org/abstracts/search?q=Sneha%20Mehta"> Sneha Mehta</a>, <a href="https://publications.waset.org/abstracts/search?q=Dishant%20Upadhyay"> Dishant Upadhyay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Our project topic is “Alphabet Recognition using pixel probability distribution”. The project uses techniques of Image Processing and Machine Learning in Computer Vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files etc. Alphabet Recognition based OCR application is sometimes used in signature recognition which is used in bank and other high security buildings. One of the popular mobile applications includes reading a visiting card and directly storing it to the contacts. OCR's are known to be used in radar systems for reading speeders license plates and lots of other things. The implementation of our project has been done using Visual Studio and Open CV (Open Source Computer Vision). Our algorithm is based on Neural Networks (machine learning). The project was implemented in three modules: (1) Training: This module aims “Database Generation”. Database was generated using two methods: (a) Run-time generation included database generation at compilation time using inbuilt fonts of OpenCV library. Human intervention is not necessary for generating this database. (b) Contour–detection: ‘jpeg’ template containing different fonts of an alphabet is converted to the weighted matrix using specialized functions (contour detection and blob detection) of OpenCV. The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to be stored (119kb precisely). (2) Preprocessing: Input image is pre-processed using image processing concepts such as adaptive thresholding, binarizing, dilating etc. and is made ready for segmentation. “Segmentation” includes extraction of lines, words, and letters from the processed text image. 
Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix
Procedia: https://publications.waset.org/abstracts/12115/alphabet-recognition-using-pixel-probability-distribution | PDF: https://publications.waset.org/abstracts/12115.pdf | Downloads: 389

16. Gesture-Controlled Interface Using Computer Vision and Python
Authors: Vedant Vardhan Rathour, Anant Agrawal
Abstract: The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computer using hand gestures and voice commands. The system leverages advanced computer vision techniques using the MediaPipe framework and OpenCV to detect and interpret real-time hand gestures, transforming them into mouse actions such as clicking, dragging and scrolling. Additionally, the integration of a voice assistant powered by the SpeechRecognition library allows for seamless execution of tasks like web searches, location navigation and gesture control on the system through voice commands.
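A minimal sketch of the hand-tracking half, assuming the MediaPipe Hands solution API and mapping the index fingertip to the cursor via pyautogui (an assumed choice; the paper does not name its mouse library):

```python
import cv2
import mediapipe as mp
import pyautogui

hands = mp.solutions.hands.Hands(max_num_hands=1)
screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # MediaPipe expects RGB
    result = hands.process(rgb)
    if result.multi_hand_landmarks:
        # Landmark 8 is the index fingertip (normalized coordinates).
        tip = result.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(int(tip.x * screen_w), int(tip.y * screen_h))
```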
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/193844/gesture-controlled-interface-using-computer-vision-and-python" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193844.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">12</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15</span> Implementation of a Low-Cost Driver Drowsiness Evaluation System Using a Thermal Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Isa%20Moazen">Isa Moazen</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Nahvi"> Ali Nahvi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Driver drowsiness is a major cause of vehicle accidents, and facial images are highly valuable to detect drowsiness. In this paper, we perform our research via a thermal camera to record drivers' facial images on a driving simulator. A robust real-time algorithm extracts the features using horizontal and vertical integration projection, contours, contour orientations, and cropping tools. The features are included four target areas on the cheeks and forehead. Qt compiler and OpenCV are used with two cameras with different resolutions. A high-resolution thermal camera is used for fifteen subjects, and a low-resolution one is used for a person. The results are investigated by four temperature plots and evaluated by observer rating of drowsiness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advanced%20driver%20assistance%20systems" title="advanced driver assistance systems">advanced driver assistance systems</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20imaging" title=" thermal imaging"> thermal imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=driver%20drowsiness%20detection" title=" driver drowsiness detection"> driver drowsiness detection</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/131366/implementation-of-a-low-cost-driver-drowsiness-evaluation-system-using-a-thermal-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/131366.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14</span> Integrated Gesture and Voice-Activated Mouse Control System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dev%20Pratap%20Singh">Dev Pratap Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Harshika%20Hasija"> Harshika Hasija</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashwini%20S."> Ashwini S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computers using hand gestures and voice commands. The system leverages advanced computer vision techniques using the Media Pipe framework and OpenCV to detect and interpret real-time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the speech recognition library allows for seamless execution of tasks like web searches, location navigation, and gesture control in the system through voice commands. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20assistant" title=" voice assistant"> voice assistant</a> </p> <a href="https://publications.waset.org/abstracts/193896/integrated-gesture-and-voice-activated-mouse-control-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193896.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">10</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13</span> Real Time Detection, Prediction and Reconstitution of Rain Drops</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Burahee">R. Burahee</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Chassinat"> B. Chassinat</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20de%20Laclos"> T. de Laclos</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20D%C3%A9p%C3%A9e"> A. Dépée</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Sastim"> A. Sastim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this paper is to propose a solution to detect, predict and reconstitute rain drops in real time – during the night – using an embedded material with an infrared camera. To prevent the system from needing too high hardware resources, simple models are considered in a powerful image treatment algorithm reducing considerably calculation time in OpenCV software. Using a smart model – drops will be matched thanks to a process running through two consecutive pictures for implementing a sophisticated tracking system. With this system drops computed trajectory gives information for predicting their future location. Thanks to this technique, treatment part can be reduced. The hardware system composed by a Raspberry Pi is optimized to host efficiently this code for real time execution. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=reconstitution" title="reconstitution">reconstitution</a>, <a href="https://publications.waset.org/abstracts/search?q=prediction" title=" prediction"> prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=rain%20drop" title=" rain drop"> rain drop</a>, <a href="https://publications.waset.org/abstracts/search?q=real%20time" title=" real time"> real time</a>, <a href="https://publications.waset.org/abstracts/search?q=raspberry" title=" raspberry"> raspberry</a>, <a href="https://publications.waset.org/abstracts/search?q=infrared" title=" infrared"> infrared</a> </p> <a href="https://publications.waset.org/abstracts/12821/real-time-detection-prediction-and-reconstitution-of-rain-drops" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12821.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">419</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12</span> Dynamic Foot Pressure Measurement System Using Optical Sensors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tanapon%20Keatsamarn">Tanapon Keatsamarn</a>, <a href="https://publications.waset.org/abstracts/search?q=Chuchart%20Pintavirooj"> Chuchart Pintavirooj</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Foot pressure measurement provides necessary information for diagnosis diseases, foot insole design, disorder prevention and other application. In this paper, dynamic foot pressure measurement is presented for pressure measuring with high resolution and accuracy. The dynamic foot pressure measurement system consists of hardware and software system. The hardware system uses a transparent acrylic plate and uses steel as the base. The glossy white paper is placed on the top of the transparent acrylic plate and covering with a black acrylic on the system to block external light. Lighting from LED strip entering around the transparent acrylic plate. The optical sensors, the digital cameras, are underneath the acrylic plate facing upwards. They have connected with software system to process and record foot pressure video in avi file. Visual Studio 2017 is used for software system using OpenCV library. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=foot" title="foot">foot</a>, <a href="https://publications.waset.org/abstracts/search?q=foot%20pressure" title=" foot pressure"> foot pressure</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20sensors" title=" optical sensors"> optical sensors</a> </p> <a href="https://publications.waset.org/abstracts/89148/dynamic-foot-pressure-measurement-system-using-optical-sensors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89148.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">247</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11</span> Smoker Recognition from Lung X-Ray Images Using Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Moumita%20Chanda">Moumita Chanda</a>, <a href="https://publications.waset.org/abstracts/search?q=Md.%20Fazlul%20Karim%20Patwary"> Md. Fazlul Karim Patwary</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Smoking is one of the most popular recreational drug use behaviors, and it contributes to birth defects, COPD, heart attacks, and erectile dysfunction. To completely eradicate this disease, it is imperative that it be identified and treated. Numerous smoking cessation programs have been created, and they demonstrate how beneficial it may be to help someone stop smoking at the ideal time. A tomography meter is an effective smoking detector. Other wearables, such as RF-based proximity sensors worn on the collar and wrist to detect when the hand is close to the mouth, have been proposed in the past, but they are not impervious to deceptive variables. In this study, we create a machine that can discriminate between smokers and non-smokers in real-time with high sensitivity and specificity by watching and collecting the human lung and analyzing the X-ray data using machine learning. If it has the highest accuracy, this machine could be utilized in a hospital, in the selection of candidates for the army or police, or in university entrance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=smoker%20detection" title=" smoker detection"> smoker detection</a>, <a href="https://publications.waset.org/abstracts/search?q=non-smoker%20detection" title=" non-smoker detection"> non-smoker detection</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20Intelligence" title=" artificial Intelligence"> artificial Intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=X-ray%20Image%20detection" title=" X-ray Image detection"> X-ray Image detection</a> </p> <a href="https://publications.waset.org/abstracts/161109/smoker-recognition-from-lung-x-ray-images-using-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161109.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">84</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10</span> Face Tracking and Recognition Using Deep Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Degale%20Desta">Degale Desta</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most important factor in identifying a person is their face. Even identical twins have their own distinct faces. As a result, identification and face recognition are needed to tell one person from another. A face recognition system is a verification tool used to establish a person's identity using biometrics. Nowadays, face recognition is a common technique used in a variety of applications, including home security systems, criminal identification, and phone unlock systems. This system is more secure because it only requires a facial image instead of other dependencies like a key or card. Face detection and face identification are the two phases that typically make up a human recognition system.The idea behind designing and creating a face recognition system using deep learning with Azure ML Python's OpenCV is explained in this paper. Face recognition is a task that can be accomplished using deep learning, and given the accuracy of this method, it appears to be a suitable approach. To show how accurate the suggested face recognition system is, experimental results are given in 98.46% accuracy using Fast-RCNN Performance of algorithms under different training conditions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-RCNN" title=" fast-RCNN"> fast-RCNN</a> </p> <a href="https://publications.waset.org/abstracts/163134/face-tracking-and-recognition-using-deep-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163134.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> Open-Source YOLO CV For Detection of Dust on Solar PV Surface</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jeewan%20Rai">Jeewan Rai</a>, <a href="https://publications.waset.org/abstracts/search?q=Kinzang"> Kinzang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yeshi%20Jigme%20Choden"> Yeshi Jigme Choden</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accumulation of dust on solar panels impacts the overall efficiency and the amount of energy they produce. While various techniques exist for detecting dust to schedule cleaning, many of these methods use MATLAB image processing tools and other licensed software, which can be financially burdensome. This study will investigate the efficiency of a free open-source computer vision library using the YOLO algorithm. The proposed approach has been tested on images of solar panels with varying dust levels through an experiment setup. The experimental findings illustrated the effectiveness of using the YOLO-based image classification method and the overall dust detection approach with an accuracy of 90% in distinguishing between clean and dusty panels. This open-source solution provides a cost effective and accessible alternative to commercial image processing tools, offering solutions for optimizing solar panel maintenance and enhancing energy production. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=YOLO" title="YOLO">YOLO</a>, <a href="https://publications.waset.org/abstracts/search?q=openCV" title=" openCV"> openCV</a>, <a href="https://publications.waset.org/abstracts/search?q=dust%20detection" title=" dust detection"> dust detection</a>, <a href="https://publications.waset.org/abstracts/search?q=solar%20panels" title=" solar panels"> solar panels</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/189289/open-source-yolo-cv-for-detection-of-dust-on-solar-pv-surface" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/189289.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">32</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> Development of a Computer Vision System for the Blind and Visually Impaired Person</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20C.%20Belleza">Rodrigo C. Belleza</a>, <a href="https://publications.waset.org/abstracts/search?q=Jr."> Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Roselyn%20A.%20Maa%C3%B1o"> Roselyn A. Maaño</a>, <a href="https://publications.waset.org/abstracts/search?q=Karl%20Patrick%20E.%20Camota"> Karl Patrick E. Camota</a>, <a href="https://publications.waset.org/abstracts/search?q=Darwin%20Kim%20Q.%20Bulawan"> Darwin Kim Q. Bulawan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Eyes are an essential and conspicuous organ of the human body. Human eyes are outward and inward portals of the body that allows to see the outside world and provides glimpses into ones inner thoughts and feelings. Inevitable blindness and visual impairments may result from eye-related disease, trauma, or congenital or degenerative conditions that cannot be corrected by conventional means. The study emphasizes innovative tools that will serve as an aid to the blind and visually impaired (VI) individuals. The researchers fabricated a prototype that utilizes the Microsoft Kinect for Windows and Arduino microcontroller board. The prototype facilitates advanced gesture recognition, voice recognition, obstacle detection and indoor environment navigation. Open Computer Vision (OpenCV) performs image analysis, and gesture tracking to transform Kinect data to the desired output. A computer vision technology device provides greater accessibility for those with vision impairments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=algorithms" title="algorithms">algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=blind" title=" blind"> blind</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20systems" title=" embedded systems"> embedded systems</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a> </p> <a href="https://publications.waset.org/abstracts/2016/development-of-a-computer-vision-system-for-the-blind-and-visually-impaired-person" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2016.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> Imp_hist-Si: Improved Hybrid Image Segmentation Technique for Satellite Imagery to Decrease the Segmentation Error Rate</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neetu%20Manocha">Neetu Manocha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image segmentation is a technique where a picture is parted into distinct parts having similar features which have a place with similar items. Various segmentation strategies have been proposed as of late by prominent analysts. But, after ultimate thorough research, the novelists have analyzed that generally, the old methods do not decrease the segmentation error rate. Then author finds the technique HIST-SI to decrease the segmentation error rates. In this technique, cluster-based and threshold-based segmentation techniques are merged together. After then, to improve the result of HIST-SI, the authors added the method of filtering and linking in this technique named Imp_HIST-SI to decrease the segmentation error rates. The goal of this research is to find a new technique to decrease the segmentation error rates and produce much better results than the HIST-SI technique. For testing the proposed technique, a dataset of Bhuvan – a National Geoportal developed and hosted by ISRO (Indian Space Research Organisation) is used. Experiments are conducted using Scikit-image & OpenCV tools of Python, and performance is evaluated and compared over various existing image segmentation techniques for several matrices, i.e., Mean Square Error (MSE) and Peak Signal Noise Ratio (PSNR). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=satellite%20image" title="satellite image">satellite image</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=error%20rate" title=" error rate"> error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=MSE" title=" MSE"> MSE</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=HIST-SI" title=" HIST-SI"> HIST-SI</a>, <a href="https://publications.waset.org/abstracts/search?q=linking" title=" linking"> linking</a>, <a href="https://publications.waset.org/abstracts/search?q=filtering" title=" filtering"> filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=imp_HIST-SI" title=" imp_HIST-SI"> imp_HIST-SI</a> </p> <a href="https://publications.waset.org/abstracts/149905/imp-hist-si-improved-hybrid-image-segmentation-technique-for-satellite-imagery-to-decrease-the-segmentation-error-rate" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149905.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> Deep Learning Approach to Trademark Design Code Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Girish%20J.%20Showkatramani">Girish J. Showkatramani</a>, <a href="https://publications.waset.org/abstracts/search?q=Arthi%20M.%20Krishna"> Arthi M. Krishna</a>, <a href="https://publications.waset.org/abstracts/search?q=Sashi%20Nareddi"> Sashi Nareddi</a>, <a href="https://publications.waset.org/abstracts/search?q=Naresh%20Nula"> Naresh Nula</a>, <a href="https://publications.waset.org/abstracts/search?q=Aaron%20Pepe"> Aaron Pepe</a>, <a href="https://publications.waset.org/abstracts/search?q=Glen%20Brown"> Glen Brown</a>, <a href="https://publications.waset.org/abstracts/search?q=Greg%20Gabel"> Greg Gabel</a>, <a href="https://publications.waset.org/abstracts/search?q=Chris%20Doninger"> Chris Doninger</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Trademark examination and approval is a complex process that involves analysis and review of the design components of the marks such as the visual representation as well as the textual data associated with marks such as marks' description. Currently, the process of identifying marks with similar visual representation is done manually in United States Patent and Trademark Office (USPTO) and takes a considerable amount of time. Moreover, the accuracy of these searches depends heavily on the experts determining the trademark design codes used to catalog the visual design codes in the mark. In this study, we explore several methods to automate trademark design code classification. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=trademark%20design%20code" title="trademark design code">trademark design code</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=trademark%20image%20classification" title=" trademark image classification"> trademark image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=trademark%20image%20search" title=" trademark image search"> trademark image search</a>, <a href="https://publications.waset.org/abstracts/search?q=Inception-ResNet-v2" title=" Inception-ResNet-v2"> Inception-ResNet-v2</a> </p> <a href="https://publications.waset.org/abstracts/85337/deep-learning-approach-to-trademark-design-code-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85337.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5</span> Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anukriti%20Kumar">Anukriti Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Tanmay%20Singh"> Tanmay Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Dinesh%20Kumar%20Vishwakarma"> Dinesh Kumar Vishwakarma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like congestion impose a significant time burden around the world, and one significant solution is the proper implementation of an Intelligent Transport System (ITS). ITS integrates tools such as smart sensors, artificial intelligence, positioning technologies and mobile data services to manage traffic flow, reduce congestion and enhance drivers' ability to avoid accidents during adverse weather. Recognition of road and traffic signs is an emerging field of research in ITS, and the traffic sign classification problem must be solved as a major step towards building semi-autonomous and autonomous driving systems. This work implements an approach to traffic sign classification by developing a Convolutional Neural Network (CNN) classifier trained on the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than relying on hand-crafted features, the model learns its own representations, which keeps the parameter count from exploding, and uses data augmentation methods. Our model achieved an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
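<p class="card-text">The authors' exact architecture is not given; as one illustration, a small CNN for the GTSRB's 43 sign classes could be sketched in Keras as below, with dropout as one way to keep over-fitting in check (the framework choice and layer sizes are assumptions):</p> <pre><code># Illustrative CNN for 43 traffic-sign classes -- not the authors' exact model.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),                     # regularization against over-fitting
    tf.keras.layers.Dense(43, activation="softmax"),  # one output per GTSRB class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=20, validation_split=0.1)  # on augmented data</code></pre>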
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multiclass%20classification" title="multiclass classification">multiclass classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a> </p> <a href="https://publications.waset.org/abstracts/123190/intelligent-transport-system-classification-of-traffic-signs-using-deep-neural-networks-in-real-time" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/123190.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">176</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4</span> Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Blessing%20Ojeme">Blessing Ojeme</a>, <a href="https://publications.waset.org/abstracts/search?q=Frederick%20Quinn"> Frederick Quinn</a>, <a href="https://publications.waset.org/abstracts/search?q=Russell%20Karls"> Russell Karls</a>, <a href="https://publications.waset.org/abstracts/search?q=Shannon%20Quinn"> Shannon Quinn</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The detection and segmentation of mitochondria in fluorescence microscopy images are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and distortion in the image background make detection and segmentation challenging. A number of open-source software tools and artificial intelligence (AI) methods have been described in the literature for analyzing mitochondrial images, achieving remarkable classification and quantitation results. However, the combined medical and AI expertise required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using the low-cost, open-source Python and OpenCV libraries, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated using a mitochondrial fluorescence dataset, and ground-truth labels generated with Labkit are used to evaluate the performance of the detection and segmentation model. The study produces good detection and segmentation results and reports the challenges encountered while analyzing mitochondrial morphology in the fluorescence dataset. A discussion of the methods and future perspectives of fully automated frameworks concludes the paper.
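<p class="card-text">A compact sketch of the three stages just named, using the CLAHE pre-processing from the keywords, Otsu binarization, and a contour pass filtered by shape statistics; the file name and the size/elongation cut-offs are assumptions:</p> <pre><code># Three-stage mitochondria detection sketch: CLAHE, binarization, contour filtering.
import cv2

img = cv2.imread("mito_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed file name

# 1. Pre-processing: contrast-limited adaptive histogram equalization (CLAHE).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# 2. Binarization: Otsu's threshold separates mitochondria from background.
_, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3. Coarse-to-fine: find contours, then keep those whose descriptive statistics
#    (area, elongation) look mitochondrial. Both cut-offs below are assumed.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
kept = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    elongation = max(w, h) / max(1, min(w, h))
    if cv2.contourArea(c) > 20 and elongation > 1.2:
        kept.append(c)
print(len(kept), "candidate mitochondria out of", len(contours), "raw contours")</code></pre>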
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=2D" title="2D">2D</a>, <a href="https://publications.waset.org/abstracts/search?q=binarization" title=" binarization"> binarization</a>, <a href="https://publications.waset.org/abstracts/search?q=CLAHE" title=" CLAHE"> CLAHE</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=fluorescence%20microscopy" title=" fluorescence microscopy"> fluorescence microscopy</a>, <a href="https://publications.waset.org/abstracts/search?q=mitochondria" title=" mitochondria"> mitochondria</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/153306/fully-automated-methods-for-the-detection-and-segmentation-of-mitochondria-in-microscopy-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153306.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">357</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3</span> Vehicle Speed Estimation Using Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prodipta%20Bhowmik">Prodipta Bhowmik</a>, <a href="https://publications.waset.org/abstracts/search?q=Poulami%20Saha"> Poulami Saha</a>, <a href="https://publications.waset.org/abstracts/search?q=Preety%20Mehra"> Preety Mehra</a>, <a href="https://publications.waset.org/abstracts/search?q=Yogesh%20Soni"> Yogesh Soni</a>, <a href="https://publications.waset.org/abstracts/search?q=Triloki%20Nath%20Jha"> Triloki Nath Jha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In India, the smart city concept is growing day by day, and smart city development requires a better traffic management and monitoring system. Road accidents are increasing as more vehicles take to the roads, with reckless driving responsible for a large share of them, so an efficient traffic management system is needed to control traffic speed on all kinds of roads; the speed limit varies from road to road. Radar systems have been used previously, but their high cost and limited precision have kept them from being favored for traffic management. Traffic management faces new problems every day, and how to solve them has become an active research topic. This paper proposes a computer vision and machine learning-based automated system for multiple-vehicle detection, tracking, and speed estimation using image processing. Detecting vehicles and estimating their speed from real-time video is difficult, and the objective of this paper is to do both as accurately as possible. To do so, a real-time video is first captured; frames are extracted from the video; vehicles are detected in those frames; the detected vehicles are then tracked; and finally, the speed of the moving vehicles is estimated. The goal is a cost-friendly system able to detect multiple types of vehicles at the same time.
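<p class="card-text">The final speed-estimation step reduces to converting a tracked centroid's pixel displacement between frames into distance and time; a sketch under assumed calibration values (the pixels-per-metre factor and frame rate below are illustrative, not the paper's):</p> <pre><code># Speed from two tracked centroid positions -- calibration values are assumptions.
import math

PIXELS_PER_METER = 8.0   # from camera calibration (assumed)
FPS = 30.0               # video frame rate (assumed)

def speed_kmh(c1, c2, frames_apart):
    """Estimate speed (km/h) from two (x, y) centroid positions in pixels."""
    dist_px = math.hypot(c2[0] - c1[0], c2[1] - c1[1])
    meters = dist_px / PIXELS_PER_METER
    seconds = frames_apart / FPS
    return (meters / seconds) * 3.6   # m/s to km/h

# A centroid tracker (as in the keywords) supplies c1 and c2 for the same vehicle.
print(speed_kmh((410, 220), (418, 221), frames_apart=5))</code></pre>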
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title="OpenCV">OpenCV</a>, <a href="https://publications.waset.org/abstracts/search?q=Haar%20Cascade%20classifier" title=" Haar Cascade classifier"> Haar Cascade classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=DLIB" title=" DLIB"> DLIB</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV3" title=" YOLOV3"> YOLOV3</a>, <a href="https://publications.waset.org/abstracts/search?q=centroid%20tracker" title=" centroid tracker"> centroid tracker</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20detection" title=" vehicle detection"> vehicle detection</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20tracking" title=" vehicle tracking"> vehicle tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20speed%20estimation" title=" vehicle speed estimation"> vehicle speed estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/153549/vehicle-speed-estimation-using-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153549.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">84</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2</span> Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=David%20Oluigbo">David Oluigbo</a>, <a href="https://publications.waset.org/abstracts/search?q=Erik%20Hemberg"> Erik Hemberg</a>, <a href="https://publications.waset.org/abstracts/search?q=Nathan%20Shwatal"> Nathan Shwatal</a>, <a href="https://publications.waset.org/abstracts/search?q=Wenqi%20Ding"> Wenqi Ding</a>, <a href="https://publications.waset.org/abstracts/search?q=Yin%20Yuan"> Yin Yuan</a>, <a href="https://publications.waset.org/abstracts/search?q=Susanna%20Mierau"> Susanna Mierau</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and production of graphical representations reflecting changes in neuronal calcium-based fluorescence.
Methods: We created a Python-based pipeline that uses OpenCV (a computer-vision package for Python) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consists of three Python scripts, each easily accessed through a Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared the automated pipeline outputs with manually labeled data for neuronal cell locations and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that the automated pipeline efficiently pinpoints neuronal cell body locations and contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to help computer vision distinguish between cells and non-cells. Its results were comparable to manually analyzed results but with significantly reduced acquisition times of 2-5 minutes per recording versus 10-20 minutes. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline’s cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings of neuronal cell bodies in culture. Our next goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
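<p class="card-text">A condensed sketch of the pipeline's three steps on a single recording; the placeholder frames, the baseline percentile, and the dF/F cutoff are assumptions rather than the authors' parameters:</p> <pre><code># Contour detection, per-contour mean fluorescence, and transient flagging -- sketch.
import cv2
import numpy as np

# Placeholder frames; real recordings would be loaded frame-by-frame (format-specific).
frames = [np.zeros((128, 128), dtype=np.uint8) for _ in range(100)]
reference = frames[0]

# (1) Detect neuron contours: binary thresholding on the grayscale reference frame.
_, binary = cv2.threshold(reference, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# (2) Extract the mean fluorescence within each contour, for every frame.
traces = []
for c in contours:
    mask = np.zeros(reference.shape, dtype=np.uint8)
    cv2.drawContours(mask, [c], -1, 255, thickness=cv2.FILLED)
    traces.append(np.array([cv2.mean(f, mask=mask)[0] for f in frames]))

# (3) Flag transients as dF/F excursions above a cutoff (percentile and 0.2 assumed).
for i, trace in enumerate(traces):
    f0 = np.percentile(trace, 20)       # baseline fluorescence estimate
    dff = (trace - f0) / max(f0, 1e-6)  # guard against a zero baseline
    print(f"cell {i}: {int(np.sum(dff > 0.2))} active frames")</code></pre>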
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=calcium%20imaging" title="calcium imaging">calcium imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20activity" title=" neural activity"> neural activity</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/161680/automated-computer-vision-analysis-pipeline-of-calcium-imaging-neuronal-network-activity-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161680.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">82</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1</span> Optical-Based Lane-Assist System for Rowing Boats</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stephen%20Tullis">Stephen Tullis</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20David%20DiDonato"> M. David DiDonato</a>, <a href="https://publications.waset.org/abstracts/search?q=Hong%20Sung%20Park"> Hong Sung Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Rowing boats (shells) are often steered by a small rudder operated by one of the backward-facing rowers; the attention this requires slightly decreases the power that athlete can provide. Reducing the steering distraction would therefore increase the overall boat speed. Races are straight 2000 m courses with each boat in a 13.5 m wide lane marked by small (~15 cm), widely-spaced (~10 m) buoys, and the boat trajectory is affected by both cross-currents and winds. An optical buoy recognition and tracking system has been developed that provides the boat’s location and orientation with respect to the lane edges. This information is provided to the steering athlete either as a simple overlay on a video display or fed to a simplified autopilot system that gives steering directions to the athlete or directly controls the rudder. The system is then effectively a “lane-assist” device, but with small, widely-spaced lane markers viewed from a very shallow angle due to constraints on camera height. The image is captured with a lightweight 1080p webcam, and most of the image analysis is done in OpenCV. The colour RGB image is converted to grayscale using the difference of the red and blue channels, which provides good contrast between the red/yellow buoys and the water, sky and land background, white reflections, and noise. Buoy detection is done with thresholding within a tight mask applied to the image. Robust linear regression, using Tukey’s biweight estimator on the previously detected buoy locations, is used to develop the mask; this avoids the false detection of noise such as waves (reflections) and, in particular, buoys in other lanes. The robust regression also provides the current lane edges in the camera frame, which are used to calculate the displacement of the boat from the lane centre (lane location) and its yaw angle. The intersection of the detected lane edges provides a lane vanishing point, and the yaw angle can be calculated simply from the displacement of this vanishing point from the camera axis and the image plane distance. Lane location is based on the lateral displacement of the vanishing point from any horizontal cut through the lane edges. The boat lane position and yaw are currently fed to what is essentially a stripped-down marine autopilot system. Currently, only the lane location is used, in a PID controller of a rudder actuator with integrator anti-windup to deal with saturation of the rudder angle. Low Kp and Kd values avoid an unnecessarily fast return to the lane centreline and damp the response to noise, and limiters can be used to avoid lane departure and disqualification. Yaw is not used as a control input, as cross-winds and currents can require a straight course to be held with considerable yaw or crab angle. Mapping of the controller to the rudder angle’s “overall effectiveness” has not been finalized - very large rudder angles stall and have decreased turning moments, but at less extreme angles the increased rudder drag slows the boat and upsets boat balance. The full system has many features similar to automotive lane-assist systems, but the lane markers, camera positioning, control response, and noise add constraints that increase the challenge.
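<p class="card-text">A sketch of the buoy-detection front end described above: red-minus-blue grayscale conversion, thresholding, and a robust line fit to the buoy centroids. The threshold value and video source are assumptions, and OpenCV's Welsch M-estimator in fitLine stands in for Tukey's biweight, which OpenCV does not provide directly:</p> <pre><code># Buoy detection and robust lane-edge fit -- illustrative sketch.
import cv2
import numpy as np

cap = cv2.VideoCapture("race.mp4")   # assumed source; a webcam index also works
ok, frame = cap.read()
if ok:
    b, g, r = cv2.split(frame)
    gray = cv2.subtract(r, b)        # red/yellow buoys stay bright; water and sky cancel out

    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)   # assumed threshold
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Buoy centroids from contour moments.
    pts = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    if len(pts) > 1:
        # Robust line fit to the centroids; Welsch's M-estimator replaces Tukey's biweight here.
        vx, vy, x0, y0 = cv2.fitLine(np.float32(pts), cv2.DIST_WELSCH, 0, 0.01, 0.01).ravel()
        # (vx, vy) is the lane-edge direction and (x0, y0) a point on it, from which
        # the vanishing point, lane offset, and yaw angle follow.</code></pre>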
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=auto-pilot" title="auto-pilot">auto-pilot</a>, <a href="https://publications.waset.org/abstracts/search?q=lane-assist" title=" lane-assist"> lane-assist</a>, <a href="https://publications.waset.org/abstracts/search?q=marine" title=" marine"> marine</a>, <a href="https://publications.waset.org/abstracts/search?q=optical" title=" optical"> optical</a>, <a href="https://publications.waset.org/abstracts/search?q=rowing" title=" rowing"> rowing</a> </p> <a href="https://publications.waset.org/abstracts/127018/optical-based-lane-assist-system-for-rowing-boats" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127018.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> </span> </div> </div> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr 
style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>