
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: optic disc detection and segmentation</title> <meta name="description" content="Search results for: optic disc detection and segmentation"> <meta name="keywords" content="optic disc detection and segmentation"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" 
href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="optic disc detection and segmentation" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" 
title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="optic disc detection and segmentation"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 338</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: optic disc detection and segmentation</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">338</span> Traffic Light Detection Using Image Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaishnavi%20Shivde">Vaishnavi Shivde</a>, <a href="https://publications.waset.org/abstracts/search?q=Shrishti%20Sinha"> Shrishti Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Trapti%20Mishra"> Trapti Mishra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traffic light detection from a moving vehicle is an important technology both for driver safety assistance functions as well as for autonomous driving in the city. 
This paper proposes a deep-learning-based traffic light recognition method that combines a pixel-wise image segmentation technique with a fully convolutional network, namely the UNET architecture. A method for detecting the position and recognizing the state of traffic lights in video sequences is presented and evaluated on a Traffic Light Dataset containing masked traffic light image data. The first stage is detection, accomplished through image processing (image segmentation) techniques such as image cropping, color transformation, and segmentation of candidate traffic lights. The second stage is recognition, that is, identifying the color of the traffic light (its state), which is achieved using a convolutional neural network (the UNET architecture). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20light%20detection" title="traffic light detection">traffic light detection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/137254/traffic-light-detection-using-image-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137254.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">173</span> </span> </div> </div> <div
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">337</span> A Novel Breast Cancer Detection Algorithm Using Point Region Growing Segmentation and Pseudo-Zernike Moments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aileen%20F.%20Wang">Aileen F. Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Mammography has been one of the most reliable methods for early detection and diagnosis of breast cancer. However, mammography misses about 17% and up to 30% of breast cancers due to their subtle and unstable appearance in the early stages. Recent computer-aided diagnosis (CADx) technology using Zernike moments has improved detection accuracy. However, it has several drawbacks: it uses manual segmentation, Zernike moments are not robust, and it still has a relatively high false negative rate (FNR) of 17.6%. This project will focus on the development of a novel breast cancer detection algorithm to automatically segment the breast mass and further reduce the FNR. The algorithm consists of automatic segmentation of a single breast mass using Point Region Growing Segmentation, reconstruction of the segmented breast mass using Pseudo-Zernike moments, and classification of the breast mass using the root mean square (RMS). A comparative study among the various algorithms on the segmentation and reconstruction of breast masses was performed on randomly selected mammographic images. The results demonstrated that the newly developed algorithm is the best in terms of accuracy and cost effectiveness. More importantly, the new classifier RMS has the lowest FNR, at 6%.
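As an illustrative aside, the point region growing step described in this abstract can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation: the 4-connectivity, the intensity tolerance `tol`, and the toy image are assumptions made here purely for demonstration.

```python
from collections import deque

import numpy as np


def region_grow(image, seed, tol=10):
    # Grow a 4-connected region from the seed pixel, accepting
    # neighbours whose intensity lies within `tol` of the seed value.
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(image[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask


# Toy 3x3 "mammogram": a dark 2x2 patch next to bright pixels.
img = np.array([[10, 12, 90],
                [11, 13, 92],
                [80, 85, 95]], dtype=np.uint8)
mass_mask = region_grow(img, (0, 0), tol=5)
```

Seeded at the dark corner, the grown mask covers only the four connected dark pixels and excludes the bright ones, which is the essence of segmenting a single mass from its surroundings.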
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20aided%20diagnosis" title="computer aided diagnosis">computer aided diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=mammography" title=" mammography"> mammography</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20region%20growing%20segmentation" title=" point region growing segmentation"> point region growing segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=pseudo-zernike%20moments" title=" pseudo-zernike moments"> pseudo-zernike moments</a>, <a href="https://publications.waset.org/abstracts/search?q=root%20mean%20square" title=" root mean square"> root mean square</a> </p> <a href="https://publications.waset.org/abstracts/10488/a-novel-breast-cancer-detection-algorithm-using-point-region-growing-segmentation-and-pseudo-zernike-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10488.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">453</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">336</span> Training a Neural Network to Segment, Detect and Recognize Numbers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abhisek%20Dash">Abhisek Dash</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study had three neural networks, one for number segmentation, one for number detection and one for number recognition all of which are coupled to one another. All networks were trained on the MNIST dataset and were convolutional. It was assumed that the images had lighter background and darker foreground. 
The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a 7x7 window over that pixel as the focus, the eight-neighborhood of the focus was checked for further dark pixels, and the segmentation network was trained to move in the directions that contained them. Its sixteen outputs were arranged in pairs: “go east”, “don’t go east”, “go south-east”, “don’t go south-east”, “go south”, “don’t go south”, and so on with respect to the focus window. The focus window was resized to a 28x28 image, and the network was trained to consider the neighborhoods that contained dark pixels. Those neighborhoods were pushed into a queue in a fixed order, then popped one at a time and stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when each new partial image was presented. This process was repeated until the image was fully covered by 7x7 neighborhoods and no uncovered dark pixels remained. During testing, the network scans for the first dark pixel and from there predicts which neighborhoods to consider, thereby segmenting the image. The resulting group of neighborhoods is then passed to the detection network, which took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground-truth bounds of a number were known during training, the detection network was trained to output “number not found” until the bounds were met, and “number found” thereafter. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognizing the digits 0 to 9; it was activated only when the detection network voted in favor of a detected number. This methodology could segment connected and overlapping numbers.
Additionally, the recognition unit was invoked only when a number was detected, which minimized false positives. It also eliminated the need for hand-crafted rules of thumb, since segmentation is learned. The strategy can be extended to other characters as well. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=OCR" title=" OCR"> OCR</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20detection" title=" text detection"> text detection</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20segmentation" title=" text segmentation"> text segmentation</a> </p> <a href="https://publications.waset.org/abstracts/85788/training-a-neural-network-to-segment-detect-and-recognize-numbers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85788.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">335</span> Iterative Segmentation and Application of Hausdorff Dilation Distance in Defect Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Shankar%20Bharathi">S. Shankar Bharathi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Inspection of surface defects on metallic components has always been challenging due to their specular nature. Occurrences of defects such as scratches, rust, and pitting are very common on metallic surfaces during the manufacturing process.
These defects, if unchecked, can hamper the performance and reduce the lifetime of such components. Many conventional image processing algorithms for detecting surface defects involve segmentation techniques based on thresholding, edge detection, watershed segmentation, and textural segmentation, and later employ other suitable algorithms based on morphology, region growing, shape analysis, or neural networks for classification. In this paper, the work is focused only on detecting scratches. Global and other thresholding techniques were used to extract the defects, but they proved inaccurate in extracting the defects alone. This paper does not, however, focus on comparing different segmentation techniques; rather, it describes a novel approach to segmentation combined with the Hausdorff dilation distance. The proposed algorithm is based on the distribution of intensity levels, that is, on whether a certain gray level is concentrated or evenly distributed, and on the extraction of such concentrated pixels. Defective images showed a high concentration of some gray level, whereas in non-defective images the intensities seemed to be evenly distributed rather than concentrated. This formed the basis for detecting the defects in the proposed algorithm. The Hausdorff dilation distance, based on mathematical morphology, was used to strengthen the segmentation of the defects.
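The gray-level concentration idea underlying this algorithm, checking whether some intensity level is concentrated rather than evenly distributed, can be illustrated with a simple histogram-based score. This is a hypothetical sketch of the idea only, not the paper's algorithm, and the Hausdorff dilation distance step is omitted entirely.

```python
import numpy as np


def concentration_score(image, bins=256):
    # Fraction of pixels falling in the most populated gray-level bin:
    # close to 1 when some gray level dominates (a candidate defect),
    # close to 1/bins when intensities are evenly spread.
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist.max() / hist.sum()


spread = np.arange(256, dtype=np.uint8).reshape(16, 16)  # every level once
patch = np.full((16, 16), 200, dtype=np.uint8)           # one level only
```

The evenly spread image scores near zero while the single-intensity patch scores 1.0; a threshold on such a score could then flag images whose histograms are suspiciously peaked.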
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=metallic%20surface" title="metallic surface">metallic surface</a>, <a href="https://publications.waset.org/abstracts/search?q=scratches" title=" scratches"> scratches</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=hausdorff%20dilation%20distance" title=" hausdorff dilation distance"> hausdorff dilation distance</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title=" machine vision"> machine vision</a> </p> <a href="https://publications.waset.org/abstracts/34958/iterative-segmentation-and-application-of-hausdorff-dilation-distance-in-defect-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34958.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">427</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">334</span> Automatic Segmentation of the Clean Speech Signal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Ben%20Messaoud">M. A. Ben Messaoud</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Bouzid"> A. Bouzid</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Ellouze"> N. Ellouze</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech segmentation is the detection of the change points that partition an input speech signal into regions, each of which corresponds to only one speaker.
In this paper, we apply two features based on the multi-scale product (MP) of the clean speech signal, namely the spectral centroid of the MP and the zero-crossing rate of the MP. We focus on multi-scale product analysis as an important tool for segmentation. The multi-scale product is formed by multiplying the speech wavelet transform coefficients at three successive dyadic scales. We have evaluated our method on the Keele database. Experimental results show the effectiveness and good performance of our method: the two simple features can find word boundaries and extract the segments of the clean speech. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multiscale%20product" title="multiscale product">multiscale product</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20centroid" title=" spectral centroid"> spectral centroid</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20segmentation" title=" speech segmentation"> speech segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=zero%20crossings%20rate" title=" zero crossings rate"> zero crossings rate</a> </p> <a href="https://publications.waset.org/abstracts/17566/automatic-segmentation-of-the-clean-speech-signal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17566.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">499</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">333</span> Diagnosis and Analysis of Automated Liver and Tumor Segmentation on CT</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=R.%20R.%20Ramsheeja">R. R. Ramsheeja</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Sreeraj"> R. Sreeraj</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A wide range of imaging modalities is available nowadays for viewing the internal structures of the human body, such as the liver, brain, and kidneys, and computed tomography (CT) is one of the most significant. In this paper, CT liver images are used to study automatic computer-aided techniques for calculating the volume of a liver tumor. A segmentation method for detecting the tumor in the CT scan is proposed: a Gaussian filter is used for denoising the liver image, and an adaptive thresholding algorithm is used for segmentation. A multiple Region of Interest (ROI) based method helps characterize the distinguishing features and has a significant impact on classification performance. Due to the characteristics of liver tumor lesions, inherent difficulties appear in feature selection. For better performance, a novel system is introduced in which multiple ROI based feature selection and classification are performed; obtaining relevant features is important for the generalization performance of the Support Vector Machine (SVM) classifier. The proposed system improves classification performance while significantly reducing the number of features used. The diagnosis of liver cancer from computed tomography images is inherently difficult, and early detection of liver tumors is very helpful in saving human life.
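The threshold-based segmentation step mentioned here can be approximated with a local-mean adaptive threshold. The block size, offset, integral-image formulation, and toy input below are illustrative assumptions of this sketch, not the authors' exact method.

```python
import numpy as np


def adaptive_threshold(image, block=3, offset=0.0):
    # Threshold each pixel against the mean of its block x block
    # neighbourhood, computed in O(1) per pixel with an integral image.
    pad = block // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    h, w = image.shape
    win_sum = (ii[block:block + h, block:block + w]
               - ii[:h, block:block + w]
               - ii[block:block + h, :w]
               + ii[:h, :w])
    local_mean = win_sum / (block * block)
    return image > local_mean + offset


# Toy "scan": one bright lesion pixel on a dark background.
scan = np.zeros((5, 5))
scan[2, 2] = 100.0
lesion_mask = adaptive_threshold(scan, block=3)
```

Because each pixel is compared with its own neighbourhood mean rather than a single global value, the bright pixel stands out even when background illumination varies across the image, which is the usual motivation for adaptive over global thresholding.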
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computed%20tomography%20%28CT%29" title="computed tomography (CT)">computed tomography (CT)</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple%20region%20of%20interest%28ROI%29" title=" multiple region of interest(ROI)"> multiple region of interest(ROI)</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20values" title=" feature values"> feature values</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM%20classification" title=" SVM classification"> SVM classification</a> </p> <a href="https://publications.waset.org/abstracts/18207/diagnosis-and-analysis-of-automated-liver-and-tumor-segmentation-on-ct" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">509</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">332</span> Marker-Controlled Level-Set for Segmenting Breast Tumor from Thermal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Swathi%20Gopakumar">Swathi Gopakumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Sruthi%20Krishna"> Sruthi Krishna</a>, <a href="https://publications.waset.org/abstracts/search?q=Shivasubramani%20Krishnamoorthy"> Shivasubramani Krishnamoorthy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Contactless, painless and radiation-free thermal imaging technology is one of the preferred screening modalities for detection of breast 
cancer. However, a poor signal-to-noise ratio and the inexorable need to preserve the edges separating cancerous cells from normal cells make the segmentation process difficult and hence unsuitable for computer-aided diagnosis of breast cancer. This paper presents key findings from research appraising two promising techniques for the detection of breast cancer: (I) marker-controlled level-set segmentation of an anisotropic-diffusion-filtered image versus (II) marker-controlled level-set segmentation of a Gaussian-filtered image. Gaussian filtering processes the image uniformly, whereas anisotropic filtering acts only in specific areas of a thermographic image. The pre-processed (Gaussian-filtered and anisotropic-filtered) images of breast samples were then submitted for segmentation. Segmentation of the breast starts with an initial level-set function; in this study, a marker refers to the position in the image at which the initial level-set function is applied. The markers are generally placed on the left and right sides of the breast and may vary with breast size. The proposed method was applied to images from an online database with samples collected from women of varying breast characteristics. The breast could be segmented from the background by adjusting the markers. The results showed that, as a pre-processing technique, anisotropic filtering with level-set segmentation preserved the edges more effectively than Gaussian filtering, and the image segmented after anisotropic filtering was found to be more suitable for feature extraction, enabling automated computer-aided diagnosis of breast cancer.
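Anisotropic diffusion of the kind compared in this abstract is commonly the Perona-Malik scheme. A compact sketch follows; the conductance function, the `kappa` and `gamma` values, and the wrap-around border handling are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np


def anisotropic_diffusion(image, n_iter=10, kappa=30.0, gamma=0.2):
    # Perona-Malik diffusion: smooth inside regions while the
    # conductance g = exp(-(|grad| / kappa)^2) shuts diffusion down
    # across strong edges (wrap-around borders, for brevity).
    u = image.astype(float).copy()
    for _ in range(n_iter):
        flux = np.zeros_like(u)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            d = np.roll(u, shift, axis=axis) - u  # neighbour difference
            flux += np.exp(-(d / kappa) ** 2) * d
        u += gamma * flux
    return u


rng = np.random.RandomState(0)
noisy = rng.rand(8, 8) * 10       # stand-in for a noisy thermogram
smooth = anisotropic_diffusion(noisy)
```

In contrast to a Gaussian blur, the conductance term drops toward zero where local gradients are large relative to `kappa`, so noise inside regions is averaged away while strong edges are left largely intact, which matches the edge-preservation behaviour the paper reports.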
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=anisotropic%20diffusion" title="anisotropic diffusion">anisotropic diffusion</a>, <a href="https://publications.waset.org/abstracts/search?q=breast" title=" breast"> breast</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian" title=" Gaussian"> Gaussian</a>, <a href="https://publications.waset.org/abstracts/search?q=level-set" title=" level-set"> level-set</a>, <a href="https://publications.waset.org/abstracts/search?q=thermograms" title=" thermograms"> thermograms</a> </p> <a href="https://publications.waset.org/abstracts/85030/marker-controlled-level-set-for-segmenting-breast-tumor-from-thermal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85030.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">331</span> Cells Detection and Recognition in Bone Marrow Examination with Deep Learning Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiyin%20He">Shiyin He</a>, <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Huang"> Zheng Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, deep learning methods are applied in bio-medical field to detect and count different types of cells in an automatic way instead of manual work in medical practice, specifically in bone marrow examination. The process is mainly composed of two steps, detection and recognition. 
A Mask Region-based Convolutional Neural Network (Mask-RCNN) was used for detection and image segmentation to extract cells, and then a Convolutional Neural Network (CNN) as well as a Deep Residual Network (ResNet) were used for classification. Results of the cell detection network show efficiency high enough to meet application requirements. For the cell recognition network, the two networks are compared and the final system is fully applicable. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cell%20detection" title="cell detection">cell detection</a>, <a href="https://publications.waset.org/abstracts/search?q=cell%20recognition" title=" cell recognition"> cell recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Mask-RCNN" title=" Mask-RCNN"> Mask-RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a> </p> <a href="https://publications.waset.org/abstracts/98649/cells-detection-and-recognition-in-bone-marrow-examination-with-deep-learning-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98649.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">189</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">330</span> Image Segmentation Using 2-D Histogram in RGB Color Space in Digital Libraries </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=El%20Asnaoui%20Khalid">El Asnaoui Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Aksasse%20Brahim"> Aksasse Brahim</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Ouanan%20Mohammed"> Ouanan Mohammed </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an unsupervised color image segmentation method. It is based on a hierarchical analysis of 2-D histogram in RGB color space. This histogram minimizes storage space of images and thus facilitates the operations between them. The improved segmentation approach shows a better identification of objects in a color image and, at the same time, the system is fast. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=hierarchical%20analysis" title=" hierarchical analysis"> hierarchical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=2-D%20histogram" title=" 2-D histogram"> 2-D histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/42096/image-segmentation-using-2-d-histogram-in-rgb-color-space-in-digital-libraries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42096.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">329</span> Performance Evaluation of Various Segmentation Techniques on MRI of Brain Tissue</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.V.%20Suryawanshi">U.V. 
Suryawanshi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.S.%20Chowhan"> S.S. Chowhan</a>, <a href="https://publications.waset.org/abstracts/search?q=U.V%20Kulkarni"> U.V Kulkarni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The accuracy of segmentation methods is of great importance in brain image analysis. Tissue classification in magnetic resonance brain images (MRI) is an important issue in the analysis of several brain dementias. This paper portrays the performance of segmentation techniques used on brain MRI, for which a large variety of algorithms has been developed. The objective of this paper is to perform a segmentation process on MR images of the human brain using Fuzzy c-means (FCM), Kernel-based Fuzzy c-means clustering (KFCM), Spatial Fuzzy c-means (SFCM), and Improved Fuzzy c-means (IFCM). The review covers imaging modalities, MRI, methods for noise reduction, and segmentation approaches. All methods are applied to MRI brain images degraded by salt-and-pepper noise; the results demonstrate that the IFCM algorithm is more robust to noise than the standard FCM algorithm. We conclude with a discussion of the trend of future research in brain segmentation and of changes to IFCM for better results.
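A plain FCM baseline, the starting point for the KFCM/SFCM/IFCM variants compared in this abstract, can be sketched as follows. The 1-D toy intensities, the random initialization, and the fixed iteration count are illustrative choices of this sketch, not the paper's setup.

```python
import numpy as np


def fuzzy_cmeans(x, c=2, m=2.0, n_iter=50, seed=0):
    # Plain fuzzy c-means on 1-D intensities: alternately update the
    # fuzzy memberships u (n x c) and the cluster centres v (c,).
    rng = np.random.RandomState(seed)
    u = rng.rand(len(x), c)
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        v = um.T @ x / um.sum(axis=0)                 # weighted centres
        d = np.abs(x[:, None] - v[None, :]) + 1e-12   # point-centre distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)      # membership update
    return v, u


# Two well-separated intensity groups, as in bright vs dark tissue.
intensities = np.array([1.0, 1.2, 0.9, 5.0, 5.1, 4.8])
centres, memberships = fuzzy_cmeans(intensities)
labels = memberships.argmax(axis=1)
```

Unlike hard k-means, each pixel keeps a graded membership in every cluster; the kernel, spatial, and improved variants discussed in the paper modify the distance term or regularize the memberships, while this update loop stays recognizably the same.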
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=preprocessing" title=" preprocessing"> preprocessing</a>, <a href="https://publications.waset.org/abstracts/search?q=MRI" title=" MRI"> MRI</a>, <a href="https://publications.waset.org/abstracts/search?q=FCM" title=" FCM"> FCM</a>, <a href="https://publications.waset.org/abstracts/search?q=KFCM" title=" KFCM"> KFCM</a>, <a href="https://publications.waset.org/abstracts/search?q=SFCM" title=" SFCM"> SFCM</a>, <a href="https://publications.waset.org/abstracts/search?q=IFCM" title=" IFCM"> IFCM</a> </p> <a href="https://publications.waset.org/abstracts/12406/performance-evaluation-of-various-segmentation-techniques-on-mri-of-brain-tissue" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12406.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">331</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">328</span> Automated Digital Mammogram Segmentation Using Dispersed Region Growing and Pectoral Muscle Sliding Window Algorithm </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ayush%20Shrivastava">Ayush Shrivastava</a>, <a href="https://publications.waset.org/abstracts/search?q=Arpit%20Chaudhary"> Arpit Chaudhary</a>, <a href="https://publications.waset.org/abstracts/search?q=Devang%20Kulshreshtha"> Devang Kulshreshtha</a>, <a href="https://publications.waset.org/abstracts/search?q=Vibhav%20Prakash%20Singh"> Vibhav Prakash Singh</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Rajeev%20Srivastava"> Rajeev Srivastava</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detecting breast cancer at an early stage can improve the survival rate. Breast region segmentation is an essential step in the analysis of digital mammograms; accurate segmentation leads to better detection of cancer. It aims at separating the Region of Interest (ROI) from the rest of the image. The procedure begins with the removal of labels, annotations, and tags from the mammographic image using morphological opening. The Pectoral Muscle Sliding Window Algorithm (PMSWA) is then used to remove the pectoral muscle, which is necessary because pectoral-muscle intensities are similar to those of the ROI, making the two difficult to separate. After removing the pectoral muscle, the Dispersed Region Growing Algorithm (DRGA) is used for segmentation; it disperses seeds in different regions instead of a single bright region. To demonstrate the validity of our segmentation method, 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database are used. The dataset contains the medio-lateral oblique (MLO) view of mammograms. Experimental results on the MIAS dataset show the effectiveness of our proposed method. 
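Seeded region growing of the kind DRGA generalizes can be sketched as a breadth-first flood fill from a seed pixel; this single-seed toy (the tolerance value is illustrative, not the authors' tuned setting) shows the core step:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` (row, col), accepting 4-connected
    neighbours whose intensity is within `tol` of the seed intensity.
    Single-seed sketch; DRGA as described disperses several seeds."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = int(img[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(img[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```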
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CAD" title="CAD">CAD</a>, <a href="https://publications.waset.org/abstracts/search?q=dispersed%20region%20growing%20algorithm%20%28DRGA%29" title=" dispersed region growing algorithm (DRGA)"> dispersed region growing algorithm (DRGA)</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=mammography" title=" mammography"> mammography</a>, <a href="https://publications.waset.org/abstracts/search?q=pectoral%20muscle%20sliding%20window%20algorithm%20%28PMSWA%29" title=" pectoral muscle sliding window algorithm (PMSWA)"> pectoral muscle sliding window algorithm (PMSWA)</a> </p> <a href="https://publications.waset.org/abstracts/69020/automated-digital-mammogram-segmentation-using-dispersed-region-growing-and-pectoral-muscle-sliding-window-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/69020.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">312</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">327</span> Retina Registration for Biometrics Based on Characterization of Retinal Feature Points</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nougrara%20Zineb">Nougrara Zineb</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The unique structure of the blood vessels in the retina has been used for biometric identification. The retinal blood-vessel pattern is unique to each individual and is almost impossible to forge in another person. 
The advantages of retina biometrics include the high distinctiveness, universality, and stability over time of the blood-vessel pattern. Once the vessel structures have been extracted from the images, a registration stage is necessary, since the position of the retinal vessel structure can change between acquisitions due to movements of the eye. Image registration consists of the following steps: feature detection, feature matching, transform model estimation, and image resampling and transformation. In this paper, we present a registration algorithm based on the characterization of retinal feature points. For the experiments, retinal images from the DRIVE database have been tested. The proposed methodology achieves good registration results in general. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fovea" title="fovea">fovea</a>, <a href="https://publications.waset.org/abstracts/search?q=optic%20disc" title=" optic disc"> optic disc</a>, <a href="https://publications.waset.org/abstracts/search?q=registration" title=" registration"> registration</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20images" title=" retinal images"> retinal images</a> </p> <a href="https://publications.waset.org/abstracts/72438/retina-registration-for-biometrics-based-on-characterization-of-retinal-feature-points" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">326</span> The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Nassima%20Noufail">Nassima Noufail</a>, <a href="https://publications.waset.org/abstracts/search?q=Sara%20Bouhali"> Sara Bouhali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the k-means algorithm; the goal is to find groups of frames based on similarity within the video. Applying k-means clustering to all frames is time-consuming; therefore, we first identify transition frames, where the scene in the video changes significantly, and then apply k-means clustering to these transition frames. We use two image filters, the Gaussian filter and the Laplacian of Gaussian, each of which extracts a set of features from the frames. The Gaussian filter blurs the image and suppresses the higher frequencies, while the Laplacian of Gaussian detects regions of rapid intensity change; the resulting vector of filter responses is the input to the k-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map in which similar pixels are grouped. We then compute a cluster score indicating how near the clusters are to each other and plot a signal of clustering score versus frame number. Our hypothesis is that the evolution of this signal does not change while semantically related events are happening in the scene. We mark the breakpoints at which the root-mean-square level of the signal changes significantly; each breakpoint indicates the beginning of a new video segment. 
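The transition-frame identification step can be approximated with a simple frame-difference test; this sketch (the threshold value is illustrative, not the paper's) flags frames where the scene changes sharply:

```python
import numpy as np

def transition_frames(frames, thresh=30.0):
    """Flag indices where the mean absolute difference from the previous
    frame exceeds `thresh`: a cheap stand-in for clustering every frame."""
    flagged = []
    for i in range(1, len(frames)):
        d = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        if d > thresh:
            flagged.append(i)
    return flagged
```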
In the second part, for each segment from part one, we randomly select a 16-frame clip and extract spatiotemporal features using a pre-trained convolutional 3D network (C3D). The final C3D output is a 512-dimensional feature vector, so we use principal component analysis (PCA) for dimensionality reduction. The final part is classification: the C3D feature vectors are used as input to train a multi-class linear support vector machine (SVM), which detects the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20segmentation" title="video segmentation">video segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=action%20detection" title=" action detection"> action detection</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=Kmeans" title=" Kmeans"> Kmeans</a>, <a href="https://publications.waset.org/abstracts/search?q=C3D" title=" C3D"> C3D</a> </p> <a href="https://publications.waset.org/abstracts/162586/the-application-of-video-segmentation-methods-for-the-purpose-of-action-detection-in-videos" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162586.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">77</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">325</span> Automatic Method for Exudates and Hemorrhages Detection from Fundus Retinal 
Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Biran">A. Biran</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Sobhe%20Bidari"> P. Sobhe Bidari</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Raahemifar"> K. Raahemifar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic Retinopathy (DR) is an eye disease that can lead to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina, called hemorrhages and exudates. Early diagnosis of DR can prevent blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using image processing techniques including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. Since the optic disc has the same color as the exudates, it is localized and detected first. The presented method has been tested on fundus images from the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases using MATLAB. The results show that the method is capable of detecting hard exudates and the most probable soft exudates, and of detecting hemorrhages and distinguishing them from blood vessels. 
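Because the optic disc is the brightest large region in a fundus image, a crude localization can be sketched as a brightest-window search; the full method would run CHT on the smoothed image, so this is only a toy proxy for the localization step:

```python
import numpy as np

def brightest_window(gray, size=8):
    """Return the top-left corner of the brightest size x size window,
    a crude proxy for optic-disc localization (the disc is the brightest
    large region); a full method would refine this with CHT."""
    h, w = gray.shape
    best, pos = -1.0, (0, 0)
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            s = float(gray[y:y + size, x:x + size].sum())
            if s > best:
                best, pos = s, (y, x)
    return pos
```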
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title="diabetic retinopathy">diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus" title=" fundus"> fundus</a>, <a href="https://publications.waset.org/abstracts/search?q=CHT" title=" CHT"> CHT</a>, <a href="https://publications.waset.org/abstracts/search?q=exudates" title=" exudates"> exudates</a>, <a href="https://publications.waset.org/abstracts/search?q=hemorrhages" title=" hemorrhages"> hemorrhages</a> </p> <a href="https://publications.waset.org/abstracts/52591/automatic-method-for-exudates-and-hemorrhages-detection-from-fundus-retinal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52591.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">272</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">324</span> Humeral Head and Scapula Detection in Proton Density Weighted Magnetic Resonance Images Using YOLOv8</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aysun%20Sezer">Aysun Sezer</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Magnetic Resonance Imaging (MRI) is one of the advanced diagnostic tools for evaluating shoulder pathologies. Proton Density (PD)-weighted MRI sequences prove highly effective in detecting edema. However, they are deficient in the anatomical identification of bones due to a trauma-induced decrease in signal-to-noise ratio and blur in the traumatized cortices. Computer-based diagnostic systems require precise segmentation, identification, and localization of anatomical regions in medical imagery. 
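Detections in such systems are commonly scored against ground-truth boxes by intersection over union (IoU); a standard reference implementation for axis-aligned boxes is:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection counts as correct at an IoU threshold of 0.5, the criterion used in the evaluation below.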
Deep learning-based object detection algorithms exhibit remarkable proficiency in real-time object identification and localization. In this study, the YOLOv8 model was employed to detect humeral head and scapular regions in 665 axial PD-weighted MR images. The YOLOv8 configuration achieved overall success rates of 99.60% and 89.90% for detecting the humeral head and scapula, respectively, at an intersection over union (IoU) of 0.5. Our findings indicate significant promise in employing YOLOv8-based detection for the humerus and scapula regions, particularly in the context of PD-weighted images affected by both noise and intensity inhomogeneity. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=YOLOv8" title="YOLOv8">YOLOv8</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=humerus" title=" humerus"> humerus</a>, <a href="https://publications.waset.org/abstracts/search?q=scapula" title=" scapula"> scapula</a>, <a href="https://publications.waset.org/abstracts/search?q=IRM" title=" IRM"> IRM</a> </p> <a href="https://publications.waset.org/abstracts/175663/humeral-head-and-scapula-detection-in-proton-density-weighted-magnetic-resonance-images-using-yolov8" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/175663.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">66</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">323</span> Segmentation of Korean Words on Korean Road Signs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Lae-Jeong%20Park">Lae-Jeong Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Kyusoo%20Chung"> Kyusoo Chung</a>, <a href="https://publications.waset.org/abstracts/search?q=Jungho%20Moon"> Jungho Moon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces an effective method of segmenting Korean text (place names in Korean) from a Korean road-sign image. A Korean advanced directional road sign is composed of several types of visual information, such as arrows, place names in Korean and English, and route numbers. Automatic classification of this visual information and extraction of the Korean place names from road-sign images make it possible to avoid a large amount of manual input to a database system for nationwide road-sign management. We propose a series of problem-specific heuristics that correctly segment the Korean place names, the most crucial information, by effectively leaving out non-text information. Experimental results on a dataset of 368 road-sign images show a detection rate of 96% per Korean place name and 84% per road-sign image. 
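Heuristics of this kind often reduce to filtering connected-component bounding boxes by size and aspect ratio; a minimal sketch with illustrative thresholds (not the paper's tuned values):

```python
def keep_text_like(boxes, min_h=10, min_ar=0.3, max_ar=8.0):
    """Heuristic filter over component bounding boxes (x, y, w, h):
    keep components that are tall enough and have a text-like aspect
    ratio, discarding arrows, rules, and other non-text marks.
    Thresholds are illustrative assumptions."""
    return [(x, y, w, h) for (x, y, w, h) in boxes
            if h >= min_h and min_ar <= w / h <= max_ar]
```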
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=road%20signs" title=" road signs"> road signs</a>, <a href="https://publications.waset.org/abstracts/search?q=characters" title=" characters"> characters</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/42000/segmentation-of-korean-words-on-korean-road-signs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42000.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">444</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">322</span> Numerical Simulation of Fiber Bragg Grating Spectrum for Mode-І Delamination Detection </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=O.%20Hassoon">O. Hassoon</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Tarfoui"> M. Tarfoui</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20El%20Malk"> A. El Malk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fiber Bragg grating (FBG) sensors embedded in composite materials can detect and monitor damage occurring in composite structures. In this paper, we deal with mode-I delamination to determine the material's resistance to crack propagation, and use coupled-mode theory and the T-matrix method to simulate the FBG spectrum for both uniform and non-uniform strain distributions. 
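The strain-to-wavelength mapping that such simulations rely on is, to first order, a linear shift of the Bragg wavelength; a one-line sketch, assuming a photo-elastic coefficient of about 0.22 for silica fibre:

```python
def bragg_shift_nm(wl0_nm, strain, p_e=0.22):
    """First-order FBG response to axial strain:
    delta_lambda = lambda0 * (1 - p_e) * strain,
    with p_e ~ 0.22 the effective photo-elastic coefficient of
    silica fibre (assumed typical value)."""
    return wl0_nm * (1.0 - p_e) * strain
```

For a 1550 nm grating, 1000 microstrain shifts the reflection peak by roughly 1.2 nm; the T-matrix simulation resolves how a non-uniform strain profile additionally distorts the spectral shape.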
The double cantilever beam (DCB) test is modeled in FEM to determine the longitudinal strain. Two models are used: a global half model and a sub-model that represents the FBGs with a refined mesh. This method can simulate damage in the composite structure and convert the strain into a wavelength shift of the FBG spectrum. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fiber%20bragg%20grating" title="fiber bragg grating">fiber bragg grating</a>, <a href="https://publications.waset.org/abstracts/search?q=delamination%20detection" title=" delamination detection"> delamination detection</a>, <a href="https://publications.waset.org/abstracts/search?q=DCB" title=" DCB"> DCB</a>, <a href="https://publications.waset.org/abstracts/search?q=FBG%20spectrum" title=" FBG spectrum"> FBG spectrum</a>, <a href="https://publications.waset.org/abstracts/search?q=structure%20health%20monitoring" title=" structure health monitoring "> structure health monitoring </a> </p> <a href="https://publications.waset.org/abstracts/14913/numerical-simulation-of-fiber-bragg-grating-spectrum-for-mode-i-delamination-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14913.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">361</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">321</span> Intrusion Detection Techniques in NaaS in the Cloud: A Review </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashid%20Mahmood">Rashid Mahmood</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Network as a service (NaaS) has been widely used in recent years in many applications, such as mission-critical applications. In NaaS, prevention methods alone are not adequate for security, so detection methods should be added to address security issues. Authentication and encryption are considered the first solutions to the NaaS security problem, but they are no longer sufficient as NaaS use increases. In this paper, we present the concept of intrusion detection, survey some of the major intrusion detection techniques in NaaS, and compare them in some important respects. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=IDS" title="IDS">IDS</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud" title=" cloud"> cloud</a>, <a href="https://publications.waset.org/abstracts/search?q=naas" title=" naas"> naas</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a> </p> <a href="https://publications.waset.org/abstracts/36475/intrusion-detection-techniques-in-naas-in-the-cloud-a-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">320</span> Segmentation of Liver Using Random Forest Classifier </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gajendra%20Kumar%20%20Mourya">Gajendra Kumar Mourya</a>, <a href="https://publications.waset.org/abstracts/search?q=Dinesh%20%20Bhatia"> Dinesh Bhatia</a>, <a href="https://publications.waset.org/abstracts/search?q=Akash%20%20Handique"> Akash Handique</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Sunita%20Warjri"> Sunita Warjri</a>, <a href="https://publications.waset.org/abstracts/search?q=Syed%20Achaab%20Amir"> Syed Achaab Amir </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, medical imaging has become an integral part of modern healthcare. Abdominal CT images are an invaluable means of investigating abdominal organs and have been widely studied in recent years. Diagnosis of liver pathologies is one of the major areas of current interest in the field of medical image processing and is still an open problem. To study and diagnose the liver in depth, segmentation is performed to identify which part of the liver is most affected. Manual segmentation of the liver in CT images is time-consuming and suffers from inter- and intra-observer differences. However, automatic or semi-automatic computer-aided segmentation of the liver is a challenging task due to inter-patient variability in liver shape and size. In this paper, we present a technique for automatically segmenting the liver from CT images using a random forest classifier. Random forests, or random decision forests, are an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes of the individual trees. After comparison with various other techniques, the random forest classifier was found to provide better segmentation results with respect to accuracy and speed. We validated our results using various techniques, achieving above 89% accuracy in all cases. 
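The per-pixel classification pattern behind such a segmenter can be sketched with scikit-learn's RandomForestClassifier; the feature vectors and labels below are synthetic stand-ins, not CT values:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_pixel_classifier(seed=0):
    """Per-pixel random-forest sketch: each pixel is described by a small
    feature vector (here two synthetic intensity features); labels mark
    liver (1) vs. background (0). Data are illustrative stand-ins."""
    rng = np.random.default_rng(seed)
    liver = rng.normal(120.0, 5.0, (200, 2))   # bright, liver-like pixels
    other = rng.normal(60.0, 5.0, (200, 2))    # darker background pixels
    X = np.vstack([liver, other])
    y = np.array([1] * 200 + [0] * 200)
    return RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

clf = train_pixel_classifier()
```

In a real pipeline, every CT voxel would contribute a richer feature vector (intensity, texture, local statistics), and the predicted label map would form the liver mask.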
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CT%20images" title="CT images">CT images</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20validation" title=" image validation"> image validation</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/77535/segmentation-of-liver-using-random-forest-classifier" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77535.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">319</span> Real-Time Detection of Space Manipulator Self-Collision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhang%20Xiaodong">Zhang Xiaodong</a>, <a href="https://publications.waset.org/abstracts/search?q=Tang%20Zixin"> Tang Zixin</a>, <a href="https://publications.waset.org/abstracts/search?q=Liu%20Xin"> Liu Xin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to avoid self-collision of space manipulators during operation, a real-time detection method is proposed in this paper. The manipulator is fitted into a cylindrical enveloping surface, and a detection algorithm for collision between cylinders is analyzed. Using this algorithm, collisions between the manipulator's own links can be detected in real time during operation. 
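A cylinder-to-cylinder (capsule) check reduces to comparing the minimum distance between the two axis segments with the sum of the radii; this sampling-based sketch approximates that distance (a closed-form segment-segment distance would be exact, sampling just keeps the sketch short):

```python
import numpy as np

def capsules_collide(p0, p1, q0, q1, r1, r2, n=64):
    """Approximate test for two cylindrical (capsule) envelopes: sample
    both axis segments densely and compare the minimum axis-to-axis
    distance with the sum of the radii."""
    p0, p1, q0, q1 = map(np.asarray, (p0, p1, q0, q1))
    t = np.linspace(0.0, 1.0, n)[:, None]
    a = p0 + t * (p1 - p0)   # sampled points on axis 1
    b = q0 + t * (q1 - q0)   # sampled points on axis 2
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).min()
    return bool(d <= r1 + r2)
```

A safety threshold, as in the abstract, amounts to inflating the radii before the comparison.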
To ensure safe operation, a safety threshold is designed. Simulation and experimental results verify the effectiveness of the proposed algorithm for a 7-DOF space manipulator. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=space%20manipulator" title="space manipulator">space manipulator</a>, <a href="https://publications.waset.org/abstracts/search?q=collision%20detection" title=" collision detection"> collision detection</a>, <a href="https://publications.waset.org/abstracts/search?q=self-collision" title=" self-collision"> self-collision</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20real-time%20collision%20detection" title=" the real-time collision detection"> the real-time collision detection</a> </p> <a href="https://publications.waset.org/abstracts/23258/real-time-detection-of-space-manipulator-self-collision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23258.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">469</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">318</span> mKDNAD: A Network Flow Anomaly Detection Method Based On Multi-teacher Knowledge Distillation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Yang">Yang Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Dan%20Liu"> Dan Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machine learning-based anomaly detection models for network flow perform poorly under extremely unbalanced training data and suffer from slow detection and large resource consumption when deployed on network edge devices. Embedding multi-teacher knowledge distillation (mKD) in anomaly detection can transfer knowledge from multiple teacher models to a single model. Inspired by this, we propose a state-of-the-art model, mKDNAD, to improve detection performance. mKDNAD mines and integrates the knowledge of the one-dimensional sequence and the two-dimensional image implicit in network flow to improve detection accuracy on small-sample classes. The multi-teacher knowledge distillation method guides the training of the student model, speeding up detection and reducing the number of model parameters. Experiments on the CICIDS2017 dataset verify the improvements of our method in detection speed and in detection accuracy on small-sample classes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=network%20flow%20anomaly%20detection%20%28NAD%29" title="network flow anomaly detection (NAD)">network flow anomaly detection (NAD)</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-teacher%20knowledge%20distillation" title=" multi-teacher knowledge distillation"> multi-teacher knowledge distillation</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/156811/mkdnad-a-network-flow-anomaly-detection-method-based-on-multi-teacher-knowledge-distillation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156811.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">122</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">317</span> Optimized Road Lane Detection Through a Combined Canny Edge Detection, Hough Transform, and Scaleable Region Masking Toward Autonomous Driving</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samane%20Sharifi%20Monfared">Samane Sharifi Monfared</a>, <a href="https://publications.waset.org/abstracts/search?q=Lavdie%20Rada"> Lavdie Rada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, autonomous vehicles are developing rapidly toward facilitating human driving. One of the main issues is road lane detection, for suitable guidance and accident prevention. This paper aims to improve and optimize road lane detection based on a combination of camera calibration, the Hough transform, and Canny edge detection. The video processing is implemented using the OpenCV library, with the novelty of a scaleable region mask. The aim of the study is to introduce automatic road lane detection with minimal manual intervention from the user. 
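A scaleable region mask can be built as a trapezoid defined by fractions of the frame size, so the same mask adapts to any resolution before the Canny and Hough stages; the fractions below are illustrative assumptions:

```python
import numpy as np

def roi_mask(h, w, top_frac=0.6, top_w_frac=0.2):
    """Scaleable trapezoidal region mask: full image width at the bottom
    row, narrowing to top_w_frac of the width at top_frac of the height,
    so the mask scales with any frame size. Fractions are illustrative."""
    mask = np.zeros((h, w), dtype=bool)
    y_top = int(h * top_frac)
    for y in range(y_top, h):
        s = (y - y_top) / max(1, h - 1 - y_top)   # 0 at trapezoid top, 1 at bottom
        half = 0.5 * (top_w_frac + s * (1.0 - top_w_frac)) * w
        x0, x1 = int(w / 2 - half), int(w / 2 + half)
        mask[y, max(0, x0):min(w, x1)] = True
    return mask
```

The edge image would then be multiplied by this mask so the Hough transform only sees the road region.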
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hough%20transform" title="hough transform">hough transform</a>, <a href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection" title=" canny edge detection"> canny edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=optimisation" title=" optimisation"> optimisation</a>, <a href="https://publications.waset.org/abstracts/search?q=scaleable%20masking" title=" scaleable masking"> scaleable masking</a>, <a href="https://publications.waset.org/abstracts/search?q=camera%20calibration" title=" camera calibration"> camera calibration</a>, <a href="https://publications.waset.org/abstracts/search?q=improving%20the%20quality%20of%20image" title=" improving the quality of image"> improving the quality of image</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20processing" title=" video processing"> video processing</a> </p> <a href="https://publications.waset.org/abstracts/156139/optimized-road-lane-detection-through-a-combined-canny-edge-detection-hough-transform-and-scaleable-region-masking-toward-autonomous-driving" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156139.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">94</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">316</span> Intelligent Rheumatoid Arthritis Identification System Based Image Processing and Neural Classifier</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Abdulkader%20Helwan">Abdulkader Helwan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Rheumatoid arthritis is a chronic inflammatory disorder that affects the joints by damaging body tissues. Therefore, there is an urgent need for an effective intelligent identification system for knee rheumatoid arthritis, especially in its early stages. This paper develops a new intelligent system for the identification of rheumatoid arthritis of the knee utilizing image processing techniques and a neural classifier. The system involves two principal stages. The first is the image processing stage, in which the images are processed using techniques such as RGB to grayscale conversion, rescaling, median filtering, background extraction, image subtraction, segmentation using Canny edge detection, and feature extraction using pattern averaging. The extracted features are then used as inputs to the neural network, which classifies the X-ray knee images as normal or abnormal (arthritic) based on a backpropagation learning algorithm, which involves training the network on 400 normal and abnormal X-ray knee images. The system was tested on 400 X-ray images, and the network showed good performance during that phase, resulting in a good identification rate of 97%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=rheumatoid%20arthritis" title="rheumatoid arthritis">rheumatoid arthritis</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20identification" title=" intelligent identification"> intelligent identification</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20classifier" title=" neural classifier"> neural classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=backpropoagation" title=" backpropoagation"> backpropoagation</a> </p> <a href="https://publications.waset.org/abstracts/26123/intelligent-rheumatoid-arthritis-identification-system-based-image-processing-and-neural-classifier" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26123.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">532</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">315</span> Fiber-Optic Sensors for Hydrogen Peroxide Vapor Measurement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20Akbari%20Khorami">H. Akbari Khorami</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Wild"> P. Wild</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Djilali"> N. Djilali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper reports on the response of a fiber-optic sensing probe to small concentrations of hydrogen peroxide (H2O2) vapor at room temperature. H2O2 has extensive applications in industrial and medical environments. 
Conversely, H2O2 can be a health hazard by itself. For example, H2O2 induces cellular damage in human cells, and its presence can be used to diagnose illnesses such as asthma and human breast cancer. Hence, development of a reliable H2O2 sensor is of vital importance to detect and measure this species. Ferric ferrocyanide, referred to as Prussian blue (PB), was deposited on the tip of a multimode optical fiber through the single source precursor technique and served as an indicator of H2O2 in a spectroscopic manner. Sensing tests were performed in H2O2-H2O vapor mixtures with different concentrations of H2O2. The results of the sensing tests show the sensor is able to detect H2O2 concentrations in the range of 50.6 ppm to 229.5 ppm. Furthermore, the sensor response to H2O2 concentrations is linear on a log-log scale with an adjusted R-square of 0.93. This sensing behavior allows us to detect and quantify the concentration of H2O2 in the vapor phase. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chemical%20deposition" title="chemical deposition">chemical deposition</a>, <a href="https://publications.waset.org/abstracts/search?q=fiber-optic%20sensor" title=" fiber-optic sensor"> fiber-optic sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=hydrogen%20peroxide%20vapor" title=" hydrogen peroxide vapor"> hydrogen peroxide vapor</a>, <a href="https://publications.waset.org/abstracts/search?q=prussian%20blue" title=" prussian blue"> prussian blue</a> </p> <a href="https://publications.waset.org/abstracts/35449/fiber-optic-sensors-for-hydrogen-peroxide-vapor-measurement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">358</span> </span> </div> </div> <div class="card paper-listing 
mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">314</span> Count of Trees in East Africa with Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nubwimana%20Rachel">Nubwimana Rachel</a>, <a href="https://publications.waset.org/abstracts/search?q=Mugabowindekwe%20Maurice"> Mugabowindekwe Maurice</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques. Deep learning, however, makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models such as YOLOv7, SSD, and UNET, along with Generative Adversarial Networks to generate synthetic samples for training, and other augmentation techniques including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned on satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, UNET demonstrated the best performance, with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the UNET model. 
It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title="remote sensing">remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=tree%20counting" title=" tree counting"> tree counting</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a> </p> <a href="https://publications.waset.org/abstracts/177935/count-of-trees-in-east-africa-with-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177935.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">71</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">313</span> Evaluating Performance of an Anomaly Detection Module with Artificial Neural Network Implementation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edward%20Guill%C3%A9n">Edward Guillén</a>, <a href="https://publications.waset.org/abstracts/search?q=Jhordany%20Rodriguez"> Jhordany Rodriguez</a>, <a href="https://publications.waset.org/abstracts/search?q=Rafael%20P%C3%A1ez"> Rafael Páez</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Anomaly detection techniques have focused on two main components: the extraction and selection of data, and the analysis performed over the obtained data. The goal of this paper is to analyze the influence that each of these components has on system performance by evaluating detection over network scenarios with different setups. The independent variables are as follows: the number of system inputs, the way the inputs are encoded, and the complexity of the analysis techniques. For the analysis, several artificial neural network approaches are implemented with different numbers of layers. The obtained results show the influence that each of these variables has on system performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=network%20intrusion%20detection" title="network intrusion detection">network intrusion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title=" artificial neural network"> artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection%20module" title="anomaly detection module">anomaly detection module</a> </p> <a href="https://publications.waset.org/abstracts/2047/evaluating-performance-of-an-anomaly-detection-module-with-artificial-neural-network-implementation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2047.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge 
badge-info">312</span> A Highly Sensitive Dip Strip for Detection of Phosphate in Water</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hojat%20Heidari-Bafroui">Hojat Heidari-Bafroui</a>, <a href="https://publications.waset.org/abstracts/search?q=Amer%20Charbaji"> Amer Charbaji</a>, <a href="https://publications.waset.org/abstracts/search?q=Constantine%20Anagnostopoulos"> Constantine Anagnostopoulos</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Faghri"> Mohammad Faghri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Phosphorus is an essential nutrient for plant life, most frequently found as phosphate in water. Once phosphate is found in abundance in surface water, a series of adverse effects on an ecosystem can be initiated. Therefore, a portable and reliable method is needed to monitor phosphate concentrations in the field. In this paper, an inexpensive dip strip device with the ascorbic acid/antimony reagent dried on blotting paper, along with wet chemistry, is developed for the detection of low concentrations of phosphate in water. Ammonium molybdate and sulfuric acid are separately stored in liquid form so as to significantly improve the lifetime of the device and enhance the reproducibility of the device&rsquo;s performance. The limits of detection and quantification for the optimized device are 0.134 ppm and 0.472 ppm for phosphate in water, respectively. The device&rsquo;s shelf life, storage conditions, and limit of detection are superior to what has been previously reported for paper-based phosphate detection devices. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=phosphate%20detection" title="phosphate detection">phosphate detection</a>, <a href="https://publications.waset.org/abstracts/search?q=paper-based%20device" title=" paper-based device"> paper-based device</a>, <a href="https://publications.waset.org/abstracts/search?q=molybdenum%20blue%20method" title=" molybdenum blue method"> molybdenum blue method</a>, <a href="https://publications.waset.org/abstracts/search?q=colorimetric%20assay" title=" colorimetric assay"> colorimetric assay</a> </p> <a href="https://publications.waset.org/abstracts/134960/a-highly-sensitive-dip-strip-for-detection-of-phosphate-in-water" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/134960.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">170</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">311</span> Adaptive Nonparametric Approach for Guaranteed Real-Time Detection of Targeted Signals in Multichannel Monitoring Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrey%20V.%20Timofeev">Andrey V. Timofeev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An adaptive nonparametric method is proposed for stable real-time detection of seismoacoustic sources in multichannel C-OTDR systems with a significant number of channels. This method guarantees given upper boundaries for probabilities of Type I and Type II errors. Properties of the proposed method are rigorously proved. The results of practical applications of the proposed method in a real C-OTDR-system are presented in this report. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=guaranteed%20detection" title="guaranteed detection">guaranteed detection</a>, <a href="https://publications.waset.org/abstracts/search?q=multichannel%20monitoring%20systems" title=" multichannel monitoring systems"> multichannel monitoring systems</a>, <a href="https://publications.waset.org/abstracts/search?q=change%20point" title=" change point"> change point</a>, <a href="https://publications.waset.org/abstracts/search?q=interval%20estimation" title=" interval estimation"> interval estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20detection" title=" adaptive detection"> adaptive detection</a> </p> <a href="https://publications.waset.org/abstracts/21976/adaptive-nonparametric-approach-for-guaranteed-real-time-detection-of-targeted-signals-in-multichannel-monitoring-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21976.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">310</span> Intrusion Detection Using Dual Artificial Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rana%20I.%20Abdulghani">Rana I. Abdulghani</a>, <a href="https://publications.waset.org/abstracts/search?q=Amera%20I.%20Melhum"> Amera I. 
Melhum</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the rapid growth of computer usage over networks, and given the broad agreement among computer security experts that the goal of building a fully secure system is never achieved effectively, intrusion detection systems (IDS) were designed. This research adopts a comparison between two techniques for network intrusion detection. The first used Particle Swarm Optimization, which falls within the field of Swarm Intelligence; here, the algorithm was enhanced to obtain the minimum error rate by amending the cluster centers whenever a better fitness value is found during the training stages. Results show that this modification gives a more efficient exploration than the original algorithm. The second used a backpropagation neural network. Finally, the results of the two methods were compared based on the NSL_KDD data sets for the construction and evaluation of intrusion detection systems. This research is only interested in clustering the two categories (Normal and Abnormal) for the given connection records. Practical experiments result in an intrusion detection rate of 99.183818% for EPSO and 69.446416% for the BP neural network. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=IDS" title="IDS">IDS</a>, <a href="https://publications.waset.org/abstracts/search?q=SI" title=" SI"> SI</a>, <a href="https://publications.waset.org/abstracts/search?q=BP" title=" BP"> BP</a>, <a href="https://publications.waset.org/abstracts/search?q=NSL_KDD" title=" NSL_KDD"> NSL_KDD</a>, <a href="https://publications.waset.org/abstracts/search?q=PSO" title=" PSO"> PSO</a> </p> <a href="https://publications.waset.org/abstracts/26515/intrusion-detection-using-dual-artificial-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26515.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">382</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">309</span> Study of the Tribological Behavior of a Pin on Disc Type of Contact</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Djebali">S. Djebali</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Larbi"> S. Larbi</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Bilek"> A. Bilek </a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present work aims at contributing to the study of the complex phenomenon of wear of pin on disc contact in dry sliding friction between two material couples (bronze/steel and unsaturated polyester virgin and charged with graphite powder/steel). The work consists of the determination of the coefficient of friction, the study of the influence of the tribological parameters on this coefficient and the determination of the mass loss and the wear rate of the pin. 
This study also extends to highlighting the influence of the addition of graphite powder on the tribological properties of the polymer constituting the pin. The experiments are carried out on a pin-disc type tribometer that we have designed and manufactured. Tests are conducted according to the standards DIN 50321 and DIN EN 50324. The discs are made of annealed XC48 steel and quenched and tempered XC48 steel. The main results are described hereafter. The increase of the normal load and the sliding speed causes the increase of the friction coefficient, whereas the increase of the percentage of graphite and the hardness of the disc surface contribute to its reduction. The mass loss also increases with the normal load. The influence of the normal load on the friction coefficient is more significant than that of the sliding speed. The effect of the sliding speed decreases for large speed values. The increase of the amount of graphite powder leads to a decrease of the coefficient of friction, the mass loss and the wear rate. The addition of graphite to the UP resin is beneficial; it plays the role of solid lubricant. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bronze" title="bronze">bronze</a>, <a href="https://publications.waset.org/abstracts/search?q=friction%20coefficient" title=" friction coefficient"> friction coefficient</a>, <a href="https://publications.waset.org/abstracts/search?q=graphite" title=" graphite"> graphite</a>, <a href="https://publications.waset.org/abstracts/search?q=mass%20loss" title=" mass loss"> mass loss</a>, <a href="https://publications.waset.org/abstracts/search?q=polyester" title=" polyester"> polyester</a>, <a href="https://publications.waset.org/abstracts/search?q=steel" title=" steel"> steel</a>, <a href="https://publications.waset.org/abstracts/search?q=wear%20rate" title=" wear rate"> wear rate</a> </p> <a href="https://publications.waset.org/abstracts/49238/study-of-the-tribological-behavior-of-a-pin-on-disc-type-of-contact" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49238.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">345</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=10">10</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=11">11</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=12">12</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates 
its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> 
</ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
