<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <title>Search results for: canny edge detection</title> <meta name="description" content="Search results for: canny edge detection"> <meta name="keywords" content="canny edge detection"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="canny edge detection" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="canny edge detection"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4190</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: canny edge detection</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4190</span> High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hanaa%20M.%20Abdelgawad">Hanaa M. Abdelgawad</a>, <a href="https://publications.waset.org/abstracts/search?q=Mona%20Safar"> Mona Safar</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayman%20M.%20Wahba"> Ayman M. Wahba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Real-time image and video processing is a demand in many computer vision applications, e.g. video surveillance, traffic management and medical imaging. The processing of those video applications requires high computational power. 
Therefore, the optimal solution is the collaboration of a CPU and hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Canny edge detection is one of the common blocks in the pre-processing phase of an image and video processing pipeline. Our presented approach offloads the Canny edge detection algorithm from the processing system (PS) to programmable logic (PL), taking advantage of the High Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration. CPU utilization drops, and the frame rate reaches 60 fps for a 1080p full HD input video stream. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=high%20level%20synthesis" title="high level synthesis">high level synthesis</a>, <a href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection" title=" canny edge detection"> canny edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hardware%20accelerators" title=" hardware accelerators"> hardware accelerators</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/21304/high-level-synthesis-of-canny-edge-detection-algorithm-on-zynq-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21304.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">478</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4189</span> Optimized Road Lane Detection Through a Combined Canny Edge Detection, Hough Transform, and Scaleable
Region Masking Toward Autonomous Driving</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samane%20Sharifi%20Monfared">Samane Sharifi Monfared</a>, <a href="https://publications.waset.org/abstracts/search?q=Lavdie%20Rada"> Lavdie Rada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, autonomous vehicles are developing rapidly toward facilitating human car driving. One of the main issues is road lane detection, for suitable guidance and the prevention of car accidents. This paper aims to improve and optimize road lane detection based on a combination of camera calibration, the Hough transform, and Canny edge detection. The video processing is implemented using the OpenCV library, with the novelty of a scalable region masking. The aim of the study is to introduce automatic road lane detection techniques with minimal manual intervention from the user. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hough%20transform" title="hough transform">hough transform</a>, <a href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection" title=" canny edge detection"> canny edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=optimisation" title=" optimisation"> optimisation</a>, <a href="https://publications.waset.org/abstracts/search?q=scaleable%20masking" title=" scaleable masking"> scaleable masking</a>, <a href="https://publications.waset.org/abstracts/search?q=camera%20calibration" title=" camera calibration"> camera calibration</a>, <a href="https://publications.waset.org/abstracts/search?q=improving%20the%20quality%20of%20image" title=" improving the quality of image"> improving the quality of image</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a
href="https://publications.waset.org/abstracts/search?q=video%20processing" title=" video processing"> video processing</a> </p> <a href="https://publications.waset.org/abstracts/156139/optimized-road-lane-detection-through-a-combined-canny-edge-detection-hough-transform-and-scaleable-region-masking-toward-autonomous-driving" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156139.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">94</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4188</span> Comparative Analysis of Edge Detection Techniques for Extracting Characters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rana%20Gill">Rana Gill</a>, <a href="https://publications.waset.org/abstracts/search?q=Chandandeep%20Kaur"> Chandandeep Kaur </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Segmentation of images can be implemented using different fundamental algorithms, such as edge detection (discontinuity-based segmentation), region growing (similarity-based segmentation), and iterative thresholding. A comprehensive literature review relevant to the study gives a description of different techniques for vehicle number plate detection and of edge detection techniques widely used on different types of images. This research work is based on edge detection techniques and threshold calculation using five edge operators: Prewitt, Roberts, Sobel, LoG, and Canny. Segmentation of characters present in different types of images, such as vehicle number plates, house name plates, and sign boards, is selected as a case study in this work. The proposed methodology has seven stages. The proposed system has been implemented using MATLAB R2010a. All five operators have been compared on the basis of their performance. From the results, it is found that the Canny operator produces the best results among the operators used, and the performance of the edge operators in decreasing order is: Canny > LoG > Sobel > Prewitt > Roberts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=text" title=" text"> text</a>, <a href="https://publications.waset.org/abstracts/search?q=extracting%20characters" title=" extracting characters"> extracting characters</a> </p> <a href="https://publications.waset.org/abstracts/9054/comparative-analysis-of-edge-detection-techniques-for-extracting-characters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9054.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">426</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4187</span> Multiscale Edge Detection Based on Nonsubsampled Contourlet Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Enqing%20Chen">Enqing Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianbo%20Wang"> Jianbo Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It is well known that the wavelet transform provides a very effective framework for multiscale edge analysis.
However, wavelets are not very effective in representing images containing distributed discontinuities such as edges. In this paper, we propose a novel multiscale edge detection method in the nonsubsampled contourlet transform (NSCT) domain, which exploits the multiscale, multidirectional edge representation and accurate edge localization of the NSCT. Experiments on real images demonstrate that the proposed method outperforms edge detection methods based on the Canny operator, wavelets, and contourlets. The proposed method also works well for noisy images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title="edge detection">edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=NSCT" title=" NSCT"> NSCT</a>, <a href="https://publications.waset.org/abstracts/search?q=shift%20invariant" title=" shift invariant"> shift invariant</a>, <a href="https://publications.waset.org/abstracts/search?q=modulus%20maxima" title=" modulus maxima"> modulus maxima</a> </p> <a href="https://publications.waset.org/abstracts/9528/multiscale-edge-detection-based-on-nonsubsampled-contourlet-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9528.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">488</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4186</span> Lane Detection Using Labeling Based RANSAC Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yeongyu%20Choi">Yeongyu Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ju%20H.%20Park"> Ju H.
Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Youl%20Jung"> Ho-Youl Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a labeling-based RANSAC algorithm for lane detection. Advanced driver assistance systems (ADAS) have been widely researched to avoid unexpected accidents. Lane detection is essential for lane-keeping assistance and lane departure prevention. The proposed vision-based lane detection method applies Canny edge detection, inverse perspective mapping (IPM), the K-means algorithm, mathematical morphology operations, and 8-connected component labeling. Next, random samples are selected from each labeled region for RANSAC. The sampling method selects lane points with high probability. Finally, lane parameters, as straight-line or curve equations, are estimated. Through simulations on video recorded in daytime and nighttime, we show that the proposed method has better performance than the existing RANSAC algorithm in various environments.
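The RANSAC step these lane-detection pipelines share can be sketched compactly. The following is a minimal, self-contained Python illustration of RANSAC line fitting on synthetic edge points, not the authors' labeling-based implementation; the point coordinates, iteration count, and inlier threshold are illustrative assumptions:

```python
import random

def ransac_line(points, iterations=200, threshold=1.0, seed=0):
    """Fit y = m*x + b by RANSAC: repeatedly sample two points, fit a
    candidate line, and keep the candidate with the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                      # skip degenerate vertical samples
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (m * x + b)) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

# Synthetic "edge pixels": a lane line y = 0.5x + 3 plus gross outliers.
points = [(x, 0.5 * x + 3) for x in range(50)] + [(10, 40), (20, -15), (35, 60)]
(m, b), inliers = ransac_line(points)
print(m, b, len(inliers))   # the fitted line ignores the three outliers
```

In practice a least-squares refit over the final inlier set is commonly added as a refinement step, and the abstract's labeling stage would additionally restrict sampling to pixels within one connected lane region.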
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Canny%20edge%20detection" title="Canny edge detection">Canny edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=k-means%20algorithm" title=" k-means algorithm"> k-means algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=RANSAC" title=" RANSAC"> RANSAC</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20perspective%20mapping" title=" inverse perspective mapping"> inverse perspective mapping</a> </p> <a href="https://publications.waset.org/abstracts/92894/lane-detection-using-labeling-based-ransac-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92894.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">244</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4185</span> Intelligent Crowd Management Systems in Trains</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sai%20S.%20Hari">Sai S. Hari</a>, <a href="https://publications.waset.org/abstracts/search?q=Shriram%20Ramanujam"> Shriram Ramanujam</a>, <a href="https://publications.waset.org/abstracts/search?q=Unnati%20Trivedi"> Unnati Trivedi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The advent of mass transit systems like rail, metro, maglev, and various other rail-based transports has satisfied the masses' requirement for public transport to a great extent. However, the abatement of demand does not necessarily mean that it is managed efficiently or effectively. The primary problem identified, and the one this paper seeks to solve, is the haphazard manner in which the compartments are occupied. This problem is solved by comparing an empty train with an occupied one: the pixel data of an occupied train is compared to the pixel data of an empty train using the Canny edge detection technique. After the comparison, the system informs passengers at the following stops which compartments are unoccupied or have low occupancy, thus redirecting them and preventing overcrowding. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection" title="canny edge detection">canny edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=comparison" title=" comparison"> comparison</a>, <a href="https://publications.waset.org/abstracts/search?q=encapsulation" title=" encapsulation"> encapsulation</a>, <a href="https://publications.waset.org/abstracts/search?q=redirection" title=" redirection"> redirection</a> </p> <a href="https://publications.waset.org/abstracts/35655/intelligent-crowd-management-systems-in-trains" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35655.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4184</span> Hand Gesture Detection via EmguCV Canny Pruning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20N.%20Mosola">N. N. Mosola</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20J.%20Molete"> S. J.
Molete</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20S.%20Masoebe"> L. S. Masoebe</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Letsae"> M. Letsae</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hand gesture recognition is a technique used to locate, detect, and recognize hand gestures. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human-Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool, used mostly by deaf communities and those with speech disorders. Communication barriers exist when these communities interact with others. This research aims to build a hand recognition system for Lesotho&rsquo;s Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by these communities. The system has various processing modules, consisting of a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses Haar cascade detection with Canny pruning. Canny pruning applies Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and centroid to assist in the detection process. Recognition is a process of gesture classification. Template matching classifies each hand gesture in real time. The system was tested using various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera.
Different lighting conditions were considered. The higher the light intensity, the faster the detection. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system which can be used for sign language interpretation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=canny%20pruning" title="canny pruning">canny pruning</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20recognition" title=" hand recognition"> hand recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20tracking" title=" skin tracking"> skin tracking</a> </p> <a href="https://publications.waset.org/abstracts/91296/hand-gesture-detection-via-emgucv-canny-pruning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91296.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">185</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4183</span> Refined Edge Detection Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omar%20Elharrouss">Omar Elharrouss</a>, <a href="https://publications.waset.org/abstracts/search?q=Youssef%20Hmamouche"> Youssef Hmamouche</a>, <a href="https://publications.waset.org/abstracts/search?q=Assia%20Kamal%20Idrissi"> Assia Kamal Idrissi</a>, <a href="https://publications.waset.org/abstracts/search?q=Btissam%20El%20Khamlichi"> Btissam El Khamlichi</a>, <a
href="https://publications.waset.org/abstracts/search?q=Amal%20El%20Fallah-Seghrouchni"> Amal El Fallah-Seghrouchni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Edge detection is one of the most challenging tasks in computer vision, due to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, such as trees and buildings, against varied backgrounds. Edge detection is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved edge detection compared with traditional methods like Sobel and Canny. However, images of complex scenes still represent a challenge for these methods. Moreover, the edges detected by existing approaches are unrefined, and the output images contain many erroneous edges. To overcome this, in this paper, a refined edge detection network (RED-Net) is proposed, based on the mechanism of residual learning. By maintaining high edge resolution during training and preserving the resolution of the edge map through the network stages, the pooling output at each stage is connected to the output of the previous layer. After each layer, an affine batch normalization layer is used as an erosion operation for the homogeneous regions in the image. The proposed method is evaluated on the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and quality of output images.
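The residual-learning mechanism this abstract builds on is easy to state in code. Below is a generic NumPy sketch of a residual block with an identity shortcut, an illustration of the general technique rather than the RED-Net architecture itself (whose layers, batch normalization, and training details are not reproduced here):

```python
import numpy as np

def conv3x3(x, k):
    """'Same' 3x3 cross-correlation of a 2-D feature map x with kernel k."""
    h, w = x.shape
    xp = np.pad(x, 1)
    return np.array([[np.sum(xp[i:i + 3, j:j + 3] * k) for j in range(w)]
                     for i in range(h)])

def residual_block(x, k):
    """Output F(x) + x: the layer only has to learn the residual
    F(x) = H(x) - x, while the identity shortcut carries the
    full-resolution signal through unchanged."""
    return np.maximum(conv3x3(x, k), 0.0) + x   # ReLU(conv(x)) + shortcut

x = np.arange(16, dtype=float).reshape(4, 4)
zero_kernel = np.zeros((3, 3))        # F(x) == 0, so the block is an identity
y = residual_block(x, zero_kernel)
print(np.array_equal(y, x))           # prints True
```

The shortcut is what lets a deep stack of such blocks keep fine edge localization: even if a layer contributes nothing useful, the input passes through at full resolution.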
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title="edge detection">edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=scale-representation" title=" scale-representation"> scale-representation</a>, <a href="https://publications.waset.org/abstracts/search?q=backbone" title=" backbone"> backbone</a> </p> <a href="https://publications.waset.org/abstracts/150865/refined-edge-detection-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150865.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4182</span> A Study of Common Carotid Artery Behavior from B-Mode Ultrasound Image for Different Gender and BMI Categories</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nabilah%20Ibrahim">Nabilah Ibrahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Khaliza%20Musa"> Khaliza Musa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Increased intima-media thickness (IMT), which involves changes in the diameter of the carotid artery, is one of the early signs of an atherosclerotic lesion. Manual measurement of the arterial diameter is time-consuming and lacks reproducibility. Thus, this study reports an automatic approach to finding the arterial diameter behavior for different gender and body mass index (BMI) categories, focusing on a tracked region. The BMI categories are underweight, normal, and overweight. Canny edge detection is applied to the B-mode image to extract the carotid wall boundary. The results show a significant difference of 2.5% in arterial diameter between the male and female groups. In addition, across the BMI categories, the arterial diameter decreases in proportion to BMI. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=B-mode%20Ultrasound%20Image" title="B-mode Ultrasound Image">B-mode Ultrasound Image</a>, <a href="https://publications.waset.org/abstracts/search?q=carotid%20artery%20diameter" title=" carotid artery diameter"> carotid artery diameter</a>, <a href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection" title=" canny edge detection"> canny edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=body%20mass%20index" title=" body mass index"> body mass index</a> </p> <a href="https://publications.waset.org/abstracts/23345/a-study-of-common-carotid-artery-behavior-from-b-mode-ultrasound-image-for-different-gender-and-bmi-categories" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23345.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">444</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4181</span> Hand Symbol Recognition Using Canny Edge Algorithm and Convolutional Neural Network</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Harshit%20Mittal">Harshit Mittal</a>, <a href="https://publications.waset.org/abstracts/search?q=Neeraj%20Garg"> Neeraj Garg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hand symbol recognition is a pivotal component in the domain of computer vision, with far-reaching applications spanning sign language interpretation, human-computer interaction, and accessibility. This research paper presents an approach integrating the Canny edge algorithm with a convolutional neural network. The significance of this study lies in its potential to enhance communication and accessibility for individuals with hearing impairments or those engaged in gesture-based interactions with technology. In the experiments, the data is manually collected by the authors from a webcam using Python scripts; to enlarge the dataset, augmentation is applied to the original images, which makes the model more robust. Further, the dataset of about 6000 colour images, distributed equally across 5 classes (i.e., 1, 2, 3, 4, 5), is first pre-processed to grayscale images and then by the Canny edge algorithm with both thresholds set to 150. After successful dataset building, a convolutional neural network is trained on this data, giving an accuracy of 0.97834, a precision of 0.97841, a recall of 0.9783, and an F1 score of 0.97832. For end users, a Python program provides a window for hand symbol recognition. This research, at its core, seeks to advance the field of computer vision by providing an advanced perspective on hand sign recognition. By leveraging the capabilities of the Canny edge algorithm and convolutional neural network, this study contributes to the ongoing efforts to create more accurate, efficient, and accessible solutions for individuals with diverse communication needs.
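The pre-processing just described (grayscale conversion, then Canny with both hysteresis thresholds at 150) is typically a one-liner with OpenCV, e.g. `cv2.Canny(gray, 150, 150)`. As a dependency-free sketch of what those two thresholds act on, the NumPy fragment below computes the Sobel gradient magnitude and applies the double threshold; it deliberately omits Canny's Gaussian smoothing, non-maximum suppression, and hysteresis linking, so it is a simplified stand-in, not the full algorithm or the paper's code:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve(img, k):
    """'Same' 2-D filtering with a 3x3 kernel; replicate-pad the borders
    to avoid spurious edge responses at the image boundary."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    return np.array([[np.sum(p[i:i + 3, j:j + 3] * k) for j in range(w)]
                     for i in range(h)])

def edge_map(gray, low=150, high=150):
    """Gradient magnitude + double threshold. With low == high == 150,
    as in the paper's setup, the weak/strong distinction collapses to a
    single binary threshold on the gradient magnitude."""
    gx, gy = convolve(gray, SOBEL_X), convolve(gray, SOBEL_Y)
    mag = np.hypot(gx, gy)
    strong = mag >= high
    weak = (mag >= low) & ~strong
    return strong.astype(np.uint8) * 255, weak

# A synthetic grayscale image: dark left half, bright right half.
gray = np.zeros((8, 8))
gray[:, 4:] = 200.0
edges, weak = edge_map(gray)
print(edges[4].tolist())   # [0, 0, 0, 255, 255, 0, 0, 0]
```

The vertical intensity step produces responses in the two columns straddling it; in the full algorithm, non-maximum suppression would then thin this two-pixel response to a single-pixel edge.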
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20symbol%20recognition" title="hand symbol recognition">hand symbol recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=Canny%20edge%20algorithm" title=" Canny edge algorithm"> Canny edge algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a> </p> <a href="https://publications.waset.org/abstracts/176451/hand-symbol-recognition-using-canny-edge-algorithm-and-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176451.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">65</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4180</span> New Efficient Method for Coding Color Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Walaa%20M.Abd-Elhafiez">Walaa M.Abd-Elhafiez</a>, <a href="https://publications.waset.org/abstracts/search?q=Wajeb%20Gharibi"> Wajeb Gharibi </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a novel color image compression technique for efficient storage and delivery of data is proposed. The proposed technique starts with an RGB-to-YCbCr color transformation. Secondly, the Canny edge detection method is used to classify the blocks into edge and non-edge blocks. 
Each color component (Y, Cb, and Cr) is then compressed by the discrete cosine transform (DCT), quantized, and coded step by step using adaptive arithmetic coding. Our technique targets the compression ratio, bits per pixel, and peak signal-to-noise ratio, and produces better results than JPEG and more recently published schemes (such as CBDCT-CABS and MHC). The experimental results illustrate that the proposed technique is efficient and feasible in terms of compression ratio, bits per pixel, and peak signal-to-noise ratio. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20compression" title="image compression">image compression</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20image" title=" color image"> color image</a>, <a href="https://publications.waset.org/abstracts/search?q=q-coder" title=" q-coder"> q-coder</a>, <a href="https://publications.waset.org/abstracts/search?q=quantization" title=" quantization"> quantization</a>, <a href="https://publications.waset.org/abstracts/search?q=edge-detection" title=" edge-detection"> edge-detection</a> </p> <a href="https://publications.waset.org/abstracts/2342/new-efficient-method-for-coding-color-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2342.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">330</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4179</span> Image Processing Approach for Detection of Three-Dimensional Tree-Rings from X-Ray Computed Tomography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jorge%20Martinez-Garcia">Jorge Martinez-Garcia</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Ingrid%20Stelzner"> Ingrid Stelzner</a>, <a href="https://publications.waset.org/abstracts/search?q=Joerg%20Stelzner"> Joerg Stelzner</a>, <a href="https://publications.waset.org/abstracts/search?q=Damian%20Gwerder"> Damian Gwerder</a>, <a href="https://publications.waset.org/abstracts/search?q=Philipp%20Schuetz"> Philipp Schuetz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tree-ring analysis is an important part of the quality assessment and the dating of (archaeological) wood samples. It provides quantitative data about the whole anatomical ring structure, which can be used, for example, to measure the impact of the fluctuating environment on the tree growth, for the dendrochronological analysis of archaeological wooden artefacts and to estimate the wood mechanical properties. Despite advances in computer vision and edge recognition algorithms, detection and counting of annual rings are still limited to 2D datasets and performed in most cases manually, which is a time consuming, tedious task and depends strongly on the operator&rsquo;s experience. This work presents an image processing approach to detect the whole 3D tree-ring structure directly from X-ray computed tomography imaging data. The approach relies on a modified Canny edge detection algorithm, which captures fully connected tree-ring edges throughout the measured image stack and is validated on X-ray computed tomography data taken from six wood species. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ring%20recognition" title="ring recognition">ring recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=X-ray%20computed%20tomography" title=" X-ray computed tomography"> X-ray computed tomography</a>, <a href="https://publications.waset.org/abstracts/search?q=dendrochronology" title=" dendrochronology"> dendrochronology</a> </p> <a href="https://publications.waset.org/abstracts/130684/image-processing-approach-for-detection-of-three-dimensional-tree-rings-from-x-ray-computed-tomography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130684.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">220</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4178</span> Subjective Evaluation of Mathematical Morphology Edge Detection on Computed Tomography (CT) Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emhimed%20Saffor">Emhimed Saffor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the problem of edge detection in digital images is considered. Three edge detection methods based on a mathematical morphology algorithm were applied to two sets of CT images (brain and chest): a 3x3 filter for the first method, a 5x5 filter for the second, and a 7x7 filter for the third, all under the MATLAB programming environment. The results of the above-mentioned methods are subjectively evaluated. 
The results show that these methods are efficient and suitable for medical images, and they can be used in various other applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CT%20images" title="CT images">CT images</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection "> edge detection </a> </p> <a href="https://publications.waset.org/abstracts/44926/subjective-evaluation-of-mathematical-morphology-edge-detection-on-computed-tomography-ct-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">338</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4177</span> A Palmprint Identification System Based Multi-Layer Perceptron</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=David%20P.%20Tantua">David P. Tantua</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdulkader%20Helwan"> Abdulkader Helwan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometrics has recently been used in human identification systems that rely on biological traits such as fingerprints and iris scans. Biometrics-based identification systems show great efficiency and accuracy in such applications. 
However, such systems have so far been based on image processing techniques alone, which may limit their efficiency. Thus, this paper aims to develop a human palmprint identification system using a multi-layer perceptron neural network, which can learn via the backpropagation algorithm. The developed system uses images obtained from a public database available on the internet (CASIA). The processing pipeline is as follows: median filtering, image adjustment, image skeletonizing, edge detection with the Canny operator to extract features, and removal of unwanted components of the image. The second phase feeds the processed images into a neural network classifier, which adaptively learns and creates a class for each distinct image. 100 different images are used for training the system. Since this is an identification system, it is tested with the same images; therefore, the same 100 images are used for testing, and any image outside the training set should be rejected as unrecognized. The experimental results show that the developed system achieves 100% accuracy and can be implemented in real-life applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometrics" title="biometrics">biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=biological%20traits" title=" biological traits"> biological traits</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-layer%20perceptron%20neural%20network" title=" multi-layer perceptron neural network"> multi-layer perceptron neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20skeletonizing" title=" image skeletonizing"> image skeletonizing</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection%20using%20canny%20operator" title=" edge detection using canny operator"> edge detection using canny operator</a> </p> <a href="https://publications.waset.org/abstracts/26617/a-palmprint-identification-system-based-multi-layer-perceptron" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26617.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">371</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4176</span> Fabrication and Analysis of Simplified Dragonfly Wing Structures Created Using Balsa Wood and Red Prepreg Fibre Glass for Use in Biomimetic Micro Air Vehicles</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Praveena%20Nair%20Sivasankaran">Praveena Nair Sivasankaran</a>, <a href="https://publications.waset.org/abstracts/search?q=Thomas%20Arthur%20Ward"> Thomas Arthur Ward</a>, <a href="https://publications.waset.org/abstracts/search?q=Rubentheren%20Viyapuri"> Rubentheren Viyapuri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 
This paper describes a methodology to fabricate a simplified dragonfly wing structure using balsa wood and red prepreg fibre glass. These simplified wing structures were created for use in Biomimetic Micro Air Vehicles (BMAV). Dragonfly wings are highly corrugated and possess complex vein structures. In order to mimic the wing's function and retain its properties, a simplified version of the wing was designed. The simplified dragonfly wing structure was created using a method called spatial network analysis, which utilizes the Canny edge detection method. The vein structure of the wings was carved out in balsa wood and red prepreg fibre glass. Balsa wood and red prepreg fibre glass were chosen for their ultra-lightweight properties, which make them highly suitable for this application. The fabricated structure was then immersed in a nanocomposite solution containing chitosan as a film matrix, reinforced with chitin nanowhiskers and tannic acid as a crosslinking agent. These materials closely mimic the membrane of a dragonfly wing. Finally, the wings were subjected to a bending test, and comparisons were made with previous research for verification. The results differed by a margin of about 3%, and thus the structure was validated. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dragonfly%20wings" title="dragonfly wings">dragonfly wings</a>, <a href="https://publications.waset.org/abstracts/search?q=simplified" title=" simplified"> simplified</a>, <a href="https://publications.waset.org/abstracts/search?q=Canny%20edge%20detection" title=" Canny edge detection"> Canny edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=balsa%20wood" title=" balsa wood"> balsa wood</a>, <a href="https://publications.waset.org/abstracts/search?q=red%20prepreg" title=" red prepreg"> red prepreg</a>, <a href="https://publications.waset.org/abstracts/search?q=chitin" title=" chitin"> chitin</a>, <a href="https://publications.waset.org/abstracts/search?q=chitosan" title=" chitosan"> chitosan</a>, <a href="https://publications.waset.org/abstracts/search?q=tannic%20acid" title=" tannic acid"> tannic acid</a> </p> <a href="https://publications.waset.org/abstracts/28027/fabrication-and-analysis-of-simplified-dragonfly-wing-structures-created-using-balsa-wood-and-red-prepreg-fibre-glass-for-use-in-biomimetic-micro-air-vehicles" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28027.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">331</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4175</span> Similar Script Character Recognition on Kannada and Telugu</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gurukiran%20Veerapur">Gurukiran Veerapur</a>, <a href="https://publications.waset.org/abstracts/search?q=Nytik%20Birudavolu"> Nytik Birudavolu</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Seetharam%20U.%20N."> Seetharam U. N.</a>, <a href="https://publications.waset.org/abstracts/search?q=Chandravva%20Hebbi"> Chandravva Hebbi</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Praneeth%20Reddy"> R. Praneeth Reddy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between characters. Recognizing the characters requires exhaustive datasets, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varied lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations. Manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a CNN (Convolutional Neural Network) and a Visual Attention Network (VAN), were used to experiment with the dataset. A VAN architecture incorporating additional channels for Canny edge features was adopted, as it yielded the best results. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that solely used the original grayscale images. When tested per language, the model's accuracy was 80.11% for Telugu characters and 98.01% for Kannada characters. 
This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=base%20characters" title="base characters">base characters</a>, <a href="https://publications.waset.org/abstracts/search?q=modifiers" title=" modifiers"> modifiers</a>, <a href="https://publications.waset.org/abstracts/search?q=guninthalu" title=" guninthalu"> guninthalu</a>, <a href="https://publications.waset.org/abstracts/search?q=aksharas" title=" aksharas"> aksharas</a>, <a href="https://publications.waset.org/abstracts/search?q=vattakshara" title=" vattakshara"> vattakshara</a>, <a href="https://publications.waset.org/abstracts/search?q=VAN" title=" VAN"> VAN</a> </p> <a href="https://publications.waset.org/abstracts/184438/similar-script-character-recognition-on-kannada-and-telugu" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/184438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">53</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4174</span> Detecting the Edge of Multiple Images in Parallel</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prakash%20K.%20Aithal">Prakash K. Aithal</a>, <a href="https://publications.waset.org/abstracts/search?q=U.%20Dinesh%20Acharya"> U. Dinesh Acharya</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajesh%20Gopakumar"> Rajesh Gopakumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Edge is variation of brightness in an image. 
Edge detection is useful in many application areas, such as finding forests and rivers in satellite images or detecting broken bones in medical images. This paper discusses finding the edges of multiple aerial images in parallel. The proposed work was tested on 38 images: 37 colored and one monochrome. The time taken to process N images in parallel is equivalent to the time taken to process one image sequentially. The proposed method achieves pixel-level parallelism as well as image-level parallelism. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title="edge detection">edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=multicore" title=" multicore"> multicore</a>, <a href="https://publications.waset.org/abstracts/search?q=gpu" title=" gpu"> gpu</a>, <a href="https://publications.waset.org/abstracts/search?q=opencl" title=" opencl"> opencl</a>, <a href="https://publications.waset.org/abstracts/search?q=mpi" title=" mpi"> mpi</a> </p> <a href="https://publications.waset.org/abstracts/30818/detecting-the-edge-of-multiple-images-in-parallel" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30818.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">478</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4173</span> Implementation of Edge Detection Based on Autofluorescence Endoscopic Image of Field Programmable Gate Array</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hao%20Cheng">Hao Cheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhiwu%20Wang"> Zhiwu Wang</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Guozheng%20Yan"> Guozheng Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=Pingping%20Jiang"> Pingping Jiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Shijia%20Qin"> Shijia Qin</a>, <a href="https://publications.waset.org/abstracts/search?q=Shuai%20Kuang"> Shuai Kuang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autofluorescence Imaging (AFI) is a technology developed in recent years for detecting early carcinogenesis of the gastrointestinal tract. Compared with traditional white light endoscopy (WLE), this technology greatly improves the detection accuracy of early carcinogenesis, because the colors of normal tissues differ from those of cancerous tissues; thus, edge detection can distinguish them in grayscale images. In this paper, the traditional Sobel edge detection method is optimized for the gastrointestinal environment, incorporating an adaptive threshold and morphological processing. All of the processing is implemented on our self-designed system based on the OV6930 image sensor and a Field Programmable Gate Array (FPGA). The system captures the gastrointestinal image taken by the lens in real time and detects edges. The final experiments verified the feasibility of our system and the effectiveness and accuracy of the edge detection algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AFI" title="AFI">AFI</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20threshold" title=" adaptive threshold"> adaptive threshold</a>, <a href="https://publications.waset.org/abstracts/search?q=morphological%20processing" title=" morphological processing"> morphological processing</a>, <a href="https://publications.waset.org/abstracts/search?q=OV6930" title=" OV6930"> OV6930</a>, <a href="https://publications.waset.org/abstracts/search?q=FPGA" title=" FPGA"> FPGA</a> </p> <a href="https://publications.waset.org/abstracts/102685/implementation-of-edge-detection-based-on-autofluorescence-endoscopic-image-of-field-programmable-gate-array" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102685.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">230</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4172</span> Concentric Circle Detection based on Edge Pre-Classification and Extended RANSAC</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhongjie%20Yu">Zhongjie Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hancheng%20Yu"> Hancheng Yu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an effective method to detect concentric circles with imperfect edges. First, the gradient of edge pixel is coded and a 2-D lookup table is built to speed up normal generation. 
Then we take an accumulator to estimate the rough center and collect plausible edges of concentric circles through gradient and distance. Later, we take the contour-based method, which takes the contour and edge intersection, to pre-classify the edges. Finally, we use the extended RANSAC method to find all the candidate circles. The center of concentric circles is determined by the two circles with the highest concentricity. Experimental results demonstrate that the proposed method has both good performance and accuracy for the detection of concentric circles. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=concentric%20circle%20detection" title="concentric circle detection">concentric circle detection</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient" title=" gradient"> gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=contour" title=" contour"> contour</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20pre-classification" title=" edge pre-classification"> edge pre-classification</a>, <a href="https://publications.waset.org/abstracts/search?q=RANSAC" title=" RANSAC"> RANSAC</a> </p> <a href="https://publications.waset.org/abstracts/144332/concentric-circle-detection-based-on-edge-pre-classification-and-extended-ransac" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">131</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4171</span> The Need for Multi-Edge Strategies and Solutions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Hugh%20Taylor">Hugh Taylor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Industry analysts project that edge computing will be generating tens of billions in revenue in coming years. It’s not clear, however, if this will actually happen, and who, if anyone, will make it happen. Edge computing is seen as a critical success factor in industries ranging from telecom, enterprise IT and co-location. However, will any of these industries actually step up to make edge computing into a viable technology business? This paper looks at why the edge seems to be in a chasm, on the edge of realization, so to speak, but failing to coalesce into a coherent technology category like the cloud—and how the segment’s divergent industry players can come together to build a viable business at the edge. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20computing" title="edge computing">edge computing</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-edge%20strategies" title=" multi-edge strategies"> multi-edge strategies</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20data%20centers" title=" edge data centers"> edge data centers</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20cloud" title=" edge cloud"> edge cloud</a> </p> <a href="https://publications.waset.org/abstracts/154144/the-need-for-multi-edge-strategies-and-solutions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/154144.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">105</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4170</span> Edge Detection Using Multi-Agent 
System: Evaluation on Synthetic and Medical MR Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Nachour">A. Nachour</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20Ouzizi"> L. Ouzizi</a>, <a href="https://publications.waset.org/abstracts/search?q=Y.%20Aoura"> Y. Aoura</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recent developments in multi-agent systems have opened a new research field in image processing. Several algorithms are used simultaneously and improved for different applications while new methods are investigated. This paper presents a new automatic method for edge detection using several agents and many different actions. The proposed multi-agent system is based on parallel agents that locally perceive their environment, that is to say, pixels and additional environmental information. This environment is built using Vector Field Convolution, which attracts free agents to the edges. Problems of partial or hidden edges and of edge linking are solved through cooperation between agents. The presented method was implemented and evaluated on several synthetic and medical images. The experimental results confirm the efficiency and accuracy of the detected edges. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title="edge detection">edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20MRImages" title=" medical MR images"> medical MR images</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-agent%20systems" title=" multi-agent systems"> multi-agent systems</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20field%20convolution" title=" vector field convolution"> vector field convolution</a> </p> <a href="https://publications.waset.org/abstracts/50615/edge-detection-using-multi-agent-system-evaluation-on-synthetic-and-medical-mr-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50615.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">391</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4169</span> Image Fusion Based Eye Tumor Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Ashit">Ahmed Ashit</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image fusion is a significant and efficient image processing method used for detecting different types of tumors. It has been used as an effective combination technique for obtaining high-quality images that merge the anatomy and physiology of an organ, and it is a key component of large biomedical machines for diagnosing cancer, such as PET-CT scanners. This thesis aims to develop an image analysis system for the detection of eye tumors. Different image processing methods are used to extract the tumor and then mark it on the original image. 
The images are first smoothed using median filtering. The background of the image is subtracted and then added back to the original, resulting in a brighter area of interest, i.e., the tumor area. The images are adjusted in order to increase the intensity of their pixels, which leads to clearer and brighter images. Once the images are enhanced, their edges are detected using Canny operators, resulting in a segmented image that comprises only the pupil and the tumor for the abnormal images, and the pupil only for the normal images that have no tumor. The normal and abnormal images are collected from two sources: “Miles Research” and “Eye Cancer”. The computerized experimental results show that the developed image fusion based eye tumor detection system is capable of detecting the eye tumor and segmenting it to be superimposed on the original image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20tumor" title=" eye tumor"> eye tumor</a>, <a href="https://publications.waset.org/abstracts/search?q=canny%20operators" title=" Canny operators"> Canny operators</a>, <a href="https://publications.waset.org/abstracts/search?q=superimposed" title=" superimposed"> superimposed</a> </p> <a href="https://publications.waset.org/abstracts/30750/mage-fusion-based-eye-tumor-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30750.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">363</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4168</span> Defect Detection for Nanofibrous Images with Deep Learning-Based Approaches</h5> <div 
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaokai%20Liu">Gaokai Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic defect detection for nanomaterial images is widely required in industrial scenarios. Deep learning approaches are considered the most effective solutions for the great majority of image-based tasks. In this paper, an edge guidance network for defect segmentation is proposed. First, an encoder path with multiple convolution and downsampling operations is applied to acquire shared features. Then two decoder paths are both connected to the last convolution layer of the encoder and supervised by the edge and segmentation labels, respectively, to guide the whole training process. Meanwhile, the edge and encoder outputs from the same stage are concatenated to the corresponding part of the segmentation path to further tune the segmentation result. Finally, the effectiveness of the proposed method is verified via experiments on open nanofibrous datasets. 
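The abstract only describes the network at the level of tensor flow, so the following is a shape-level sketch, not the paper's actual architecture: an encoder downsamples the input into shared features, two decoder heads upsample them, and the edge head's output is concatenated into the segmentation head's input channels.

```python
# Illustrative, framework-free sketch of the edge-guidance wiring.
# All operations are assumptions standing in for learned conv layers.

def encoder(x):
    """2x2 average pooling over a list-of-lists image (downsampling stage)."""
    h, w = len(x), len(x[0])
    return [[(x[2*r][2*c] + x[2*r][2*c+1] + x[2*r+1][2*c] + x[2*r+1][2*c+1]) / 4
             for c in range(w // 2)] for r in range(h // 2)]

def upsample(x):
    """Nearest-neighbour 2x upsampling (decoder stage)."""
    out = []
    for row in x:
        wide = [v for v in row for _ in range(2)]
        out.extend([wide, list(wide)])
    return out

img = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
feat = encoder(img)                   # shared encoder features: 4 x 4
edge_out = upsample(feat)             # edge decoder head output: 8 x 8
seg_in = [upsample(feat), edge_out]   # edge output concatenated as a 2nd channel
```

In the real network each stage would be a learned convolution block, and both heads would receive supervision from edge and segmentation labels respectively.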
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=defect%20detection" title=" defect detection"> defect detection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=nanomaterials" title=" nanomaterials"> nanomaterials</a> </p> <a href="https://publications.waset.org/abstracts/133093/defect-detection-for-nanofibrous-images-with-deep-learning-based-approaches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133093.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4167</span> Distributed Framework for Pothole Detection and Monitoring Using Federated Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ezil%20Sam%20Leni">Ezil Sam Leni</a>, <a href="https://publications.waset.org/abstracts/search?q=Shalen%20S."> Shalen S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Transport service monitoring and upkeep are essential components of smart city initiatives. The main risks to the relevant departments and authorities are the ever-increasing vehicular traffic and the conditions of the roads. In India, the economy is greatly impacted by the road transport sector. In 2021, the Ministry of Road Transport and Highways, Government of India, produced a report with statistical data on traffic accidents. 
The data included the number of fatalities, injuries, and other pertinent criteria. This study proposes a distributed infrastructure for the monitoring, detection, and reporting of potholes to the appropriate authorities. In a distributed environment, the nodes are the edge devices, local edge servers, and a global server. The edge devices receive the initial model to be employed from the global server. The YOLOv8 model for pothole detection is used on the edge devices. The edge devices run the pothole detection model, gather pothole images along their path, and send the updates to the nearby edge server. The local edge server selects the clients for its aggregation process, aggregates the model updates, and sends them to the global server. The global server collects the updates from the local edge servers, performs aggregation, and derives the updated model. The updated model incorporates the pothole information received from the local edge servers, and the updates are notified to the local edge servers and the concerned authorities for monitoring and maintenance of road conditions. The entire process is implemented in the FedCV distributed environment using the client-server model and aggregation entities. Performance indicators and the experimentation environment are assessed, discussed, and presented. In future development of this study, accelerometer data may be taken into consideration for improved performance, in addition to the images captured from the transportation routes. 
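The aggregation step described above follows the standard federated averaging pattern. A minimal sketch, assuming flat weight vectors and per-client sample counts (the actual FedCV aggregation entities are more involved):

```python
# Minimal federated-averaging sketch: both the local edge server (over its
# clients) and the global server (over the edge servers) can reuse this.

def fed_avg(updates):
    """updates: list of (weight_vector, n_samples).
    Returns the sample-count-weighted mean of the weight vectors."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

# Two hypothetical edge-device updates with different local sample counts.
clients = [([1.0, 2.0], 10), ([3.0, 4.0], 30)]
global_w = fed_avg(clients)   # -> [2.5, 3.5]
```

The same call then runs at the global server with each edge server's aggregated weights and combined sample counts as one "update".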
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=federated%20Learning" title="federated Learning">federated Learning</a>, <a href="https://publications.waset.org/abstracts/search?q=pothole%20detection" title=" pothole detection"> pothole detection</a>, <a href="https://publications.waset.org/abstracts/search?q=distributed%20framework" title=" distributed framework"> distributed framework</a>, <a href="https://publications.waset.org/abstracts/search?q=federated%20averaging" title=" federated averaging"> federated averaging</a> </p> <a href="https://publications.waset.org/abstracts/176254/distributed-framework-for-pothole-detection-and-monitoring-using-federated-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176254.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">104</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4166</span> Edge Detection and Morphological Image for Estimating Gestational Age Based on Fetus Length Automatically</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Retno%20Supriyanti">Retno Supriyanti</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Chuzaeri"> Ahmad Chuzaeri</a>, <a href="https://publications.waset.org/abstracts/search?q=Yogi%20Ramadhani"> Yogi Ramadhani</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Haris%20Budi%20Widodo"> A. Haris Budi Widodo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use of ultrasonography in the medical world has been very popular including the diagnosis of pregnancy. 
In determining pregnancy, ultrasonography has many roles, such as checking the position of the fetus, detecting abnormal pregnancy, estimating fetal age, and others. Unfortunately, all of these tasks still require an obstetrician to analyze the images produced by ultrasonography. One of the most striking is the determination of gestational age. Usually, it is done by obstetricians measuring the length of the fetus manually. In this study, we developed a computer-aided diagnosis system for the determination of gestational age by measuring the length of the fetus automatically, using an edge detection method and image morphology. Results showed that the system is sufficiently accurate in determining gestational age based on image processing. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20aided%20diagnosis" title="computer aided diagnosis">computer aided diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=gestational%20age" title=" gestational age"> gestational age</a>, <a href="https://publications.waset.org/abstracts/search?q=and%20diameter%20of%20uterus" title=" diameter of uterus"> diameter of uterus</a>, <a href="https://publications.waset.org/abstracts/search?q=length%20of%20fetus" title=" length of fetus"> length of fetus</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection%20method" title=" edge detection method"> edge detection method</a>, <a href="https://publications.waset.org/abstracts/search?q=morphology%20image" title=" morphology image"> morphology image</a> </p> <a href="https://publications.waset.org/abstracts/46484/edge-detection-and-morphological-image-for-estimating-gestational-age-based-on-fetus-length-automatically" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46484.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right 
rounded"> Downloads <span class="badge badge-light">294</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4165</span> Investigating the Viability of Ultra-Low Parameter Count Networks for Real-Time Football Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tim%20Farrelly">Tim Farrelly</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, AI-powered object detection systems have opened the doors for innovative new applications and products, especially those operating in the real world or ‘on the edge’ – namely, in sport. This paper investigates the viability of an ultra-low parameter count convolutional neural network specially designed for the detection of footballs on edge devices. The main contribution of this paper is the integration of new design features (depth-wise separable convolutional blocks and squeeze-and-excitation modules) into an ultra-low parameter count network, demonstrating subsequent improvements in performance. The results show that tracking the ball in Full HD images with high accuracy is possible in real time. 
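The parameter saving that motivates depth-wise separable blocks is simple arithmetic; the counts below are generic and do not describe the paper's actual layer sizes: a standard k×k convolution with C_in inputs and C_out outputs costs k·k·C_in·C_out weights, while a depth-wise separable one costs k·k·C_in (depth-wise) plus C_in·C_out (point-wise).

```python
# Illustrative parameter-count comparison (biases ignored).

def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution."""
    return k * k * c_in * c_out

def dws_conv_params(k, c_in, c_out):
    """Weights in a depth-wise separable convolution:
    k x k depth-wise filters plus a 1x1 point-wise projection."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)        # 73728 weights
dws = dws_conv_params(3, 64, 128)    # 576 + 8192 = 8768 weights
ratio = std / dws                    # roughly 8.4x fewer parameters
```

For 3×3 kernels the saving approaches a factor of 9 as channel counts grow, which is why such blocks dominate ultra-low parameter designs.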
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision%20applications" title=" machine vision applications"> machine vision applications</a>, <a href="https://publications.waset.org/abstracts/search?q=sport" title=" sport"> sport</a>, <a href="https://publications.waset.org/abstracts/search?q=network%20design" title=" network design"> network design</a> </p> <a href="https://publications.waset.org/abstracts/145298/investigating-the-viability-of-ultra-low-parameter-count-networks-for-real-time-football-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145298.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">146</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4164</span> Temperature Contour Detection of Salt Ice Using Color Thermal Image Segmentation Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Azam%20Fazelpour">Azam Fazelpour</a>, <a href="https://publications.waset.org/abstracts/search?q=Saeed%20Reza%20Dehghani"> Saeed Reza Dehghani</a>, <a href="https://publications.waset.org/abstracts/search?q=Vlastimil%20Masek"> Vlastimil Masek</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuri%20S.%20Muzychka"> Yuri S. 
Muzychka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study uses a novel image analysis based on thermal imaging to detect temperature contours created on a salt ice surface during transient phenomena. Thermal cameras detect objects by using their emissivities and IR radiance. The ice surface temperature is not uniform during transient processes. The temperature starts to increase from the boundary of the ice towards its center. Thermal cameras are able to report temperature changes on the ice surface at every individual moment. Various contours, which show different temperature areas, appear in the ice surface picture captured by a thermal camera. Identifying the exact boundary of these contours is valuable for facilitating ice surface temperature analysis. Image processing techniques are used to extract each contour area precisely. In this study, several pictures are recorded while the temperature is increasing throughout the ice surface. Some pictures are selected to be processed at specific time intervals. An image segmentation method is applied to the images to determine the contour areas. Color thermal images are used to exploit the main information. The red, green, and blue elements of the color images are investigated to find the best contour boundaries. Image enhancement and noise removal algorithms are applied to the images to obtain high-contrast, clear images. A novel edge detection algorithm based on differences in the color of the pixels is established to determine contour boundaries. In this method, the edges of the contours are obtained according to the properties of the red, blue, and green image elements. The color image elements are assessed according to the information they carry: useful elements are retained for processing, and useless elements are removed to reduce computation time. Neighboring pixels with close intensities are assigned to one contour, and differences in intensities determine boundaries. 
The results are then verified by conducting experimental tests. An experimental setup is built using ice samples and a thermal camera. To observe the created ice contours with the thermal camera, the samples, which are initially at -20 °C, are brought into contact with a warmer surface. Pictures are captured for 20 seconds. The method is applied to five images, which are captured at time intervals of 5 seconds. The study shows that the green image element carries no useful information; therefore, the boundary detection method is applied to the red and blue image elements. In this case study, the results indicate that the proposed algorithm detects the boundaries more effectively than other edge detection methods such as Sobel and Canny. Comparison between the contour detection of this method and the temperature analysis, which gives the real boundaries, shows good agreement. This color image edge detection method is applicable to other similar cases according to their image properties. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20image%20processing" title="color image processing">color image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=ice%20contour%20boundary" title=" ice contour boundary"> ice contour boundary</a>, <a href="https://publications.waset.org/abstracts/search?q=salt%20ice" title=" salt ice"> salt ice</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20image" title=" thermal image"> thermal image</a> </p> <a href="https://publications.waset.org/abstracts/61867/temperature-contour-detection-of-salt-ice-using-color-thermal-image-segmentation-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61867.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light 
px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">314</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4163</span> A Gradient Orientation Based Efficient Linear Interpolation Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Khan">S. Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Khan"> A. Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdul%20R.%20Soomrani"> Abdul R. Soomrani</a>, <a href="https://publications.waset.org/abstracts/search?q=Raja%20F.%20Zafar"> Raja F. Zafar</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Waqas"> A. Waqas</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Akbar"> G. Akbar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a low-complexity image interpolation method. Image interpolation is used to convert a low-dimension video/image to a high-dimension video/image. The objective of a good interpolation method is to upscale an image in such a way that it preserves edges well at very low computational complexity, so that real-time processing of video frames is possible. However, low-complexity methods tend to provide real-time interpolation at the cost of blurring, jagging, and other artifacts due to errors in slope calculation. Non-linear methods, on the other hand, provide better edge preservation, but at the cost of high complexity, and hence they can be considered far from achieving real-time interpolation. The proposed method is a linear method that uses gradient orientation for slope calculation, unlike conventional linear methods that use the contrast of nearby pixels. Prewitt edge detection is applied to separate uniform regions and edges. 
Simple line averaging is applied to unknown uniform regions, whereas unknown edge pixels are interpolated after calculating slopes from the gradient orientations of neighboring known edge pixels. As a post-processing step, a bilateral filter is applied to the interpolated edge regions in order to enhance the interpolated edges. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title="edge detection">edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20orientation" title=" gradient orientation"> gradient orientation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20upscaling" title=" image upscaling"> image upscaling</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20interpolation" title=" linear interpolation"> linear interpolation</a>, <a href="https://publications.waset.org/abstracts/search?q=slope%20tracing" title=" slope tracing"> slope tracing</a> </p> <a href="https://publications.waset.org/abstracts/85765/a-gradient-orientation-based-efficient-linear-interpolation-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85765.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">260</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4162</span> Review on Quaternion Gradient Operator with Marginal and Vector Approaches for Colour Edge Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nadia%20Ben%20Youssef">Nadia Ben Youssef</a>, <a href="https://publications.waset.org/abstracts/search?q=Aicha%20Bouzid"> Aicha Bouzid</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Gradient estimation is one of the most fundamental tasks in the field of image processing in general, and for color images in particular, since research on color image gradients remains limited. The most widely used gradient method is Di Zenzo’s gradient operator, which is based on a measure of the squared local contrast of color images. The proposed gradient mechanism, presented in this paper, is based on the principle of Di Zenzo’s approach using a quaternion representation. This edge detector is compared to a marginal approach based on the multiscale product of the wavelet transform and to another vector approach based on quaternion convolution and the vector gradient. The experimental results indicate that the proposed color gradient operator outperforms the marginal approach; however, it is less efficient than the second vector approach. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gradient" title="gradient">gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20image" title=" color image"> color image</a>, <a href="https://publications.waset.org/abstracts/search?q=quaternion" title=" quaternion"> quaternion</a> </p> <a href="https://publications.waset.org/abstracts/141138/review-on-quaternion-gradient-operator-with-marginal-and-vector-approaches-for-colour-edge-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141138.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">234</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4161</span> Advancing 
in Cricket Analytics: Novel Approaches for Pitch and Ball Detection Employing OpenCV and YOLOV8</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pratham%20Madnur">Pratham Madnur</a>, <a href="https://publications.waset.org/abstracts/search?q=Prathamkumar%20Shetty"> Prathamkumar Shetty</a>, <a href="https://publications.waset.org/abstracts/search?q=Sneha%20Varur"> Sneha Varur</a>, <a href="https://publications.waset.org/abstracts/search?q=Gouri%20Parashetti"> Gouri Parashetti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to overcome conventional obstacles, this research paper investigates novel approaches for cricket pitch and ball detection that make use of cutting-edge technologies. The research integrates OpenCV for pitch inspection and modifies the YOLOv8 model for cricket ball detection in order to overcome the shortcomings of manual pitch assessment and traditional ball detection techniques. To ensure flexibility in a range of pitch environments, the pitch detection method leverages OpenCV’s color space transformation, contour extraction, and accurate color range defining features. Regarding ball detection, the YOLOv8 model emphasizes the preservation of minor object details to improve accuracy and is specifically trained to the unique properties of cricket balls. The methods are more reliable because of the careful preparation of the datasets, which include novel ball and pitch information. These cutting-edge methods not only improve cricket analytics but also set the stage for flexible methods in more general sports technology applications. 
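The colour-range step of the pitch inspection can be sketched without OpenCV; the HSV thresholds below are illustrative assumptions, not the paper's actual values, and `colorsys` stands in for OpenCV's colour space transformation:

```python
import colorsys

# Hedged sketch of pitch-colour masking: convert RGB pixels to HSV and keep
# those whose hue falls inside an assumed green band with enough saturation.

def in_pitch_range(rgb, h_lo=0.20, h_hi=0.45, s_min=0.3):
    """True if an (R, G, B) pixel (0-255) lies in the assumed pitch-green band."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    return h_lo <= h <= h_hi and s >= s_min

pixels = [(34, 139, 34), (139, 69, 19)]   # forest green, saddle brown
mask = [in_pitch_range(p) for p in pixels]
```

In the real pipeline this per-pixel test becomes a vectorised range threshold, followed by contour extraction on the resulting binary mask.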
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title="OpenCV">OpenCV</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv8" title=" YOLOv8"> YOLOv8</a>, <a href="https://publications.waset.org/abstracts/search?q=cricket" title=" cricket"> cricket</a>, <a href="https://publications.waset.org/abstracts/search?q=custom%20dataset" title=" custom dataset"> custom dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=sports" title=" sports"> sports</a> </p> <a href="https://publications.waset.org/abstracts/182020/advancing-in-cricket-analytics-novel-approaches-for-pitch-and-ball-detection-employing-opencv-and-yolov8" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182020.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=139">139</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=140">140</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a 
href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 
4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
