Search results for: grayscale
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="grayscale"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 31</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: grayscale</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">31</span> Lifting Wavelet Transform and Singular Values Decomposition for Secure Image Watermarking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siraa%20Ben%20Ftima">Siraa Ben Ftima</a>, <a href="https://publications.waset.org/abstracts/search?q=Mourad%20Talbi"> Mourad Talbi</a>, <a href="https://publications.waset.org/abstracts/search?q=Tahar%20Ezzedine"> Tahar Ezzedine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a technique of secure watermarking of grayscale and color images. This technique consists in applying the Singular Value Decomposition (SVD) in LWT (Lifting Wavelet Transform) domain in order to insert the watermark image (grayscale) in the host image (grayscale or color image). It also uses signature in the embedding and extraction steps. The technique is applied on a number of grayscale and color images. The performance of this technique is proved by the PSNR (Pick Signal to Noise Ratio), the MSE (Mean Square Error) and the SSIM (structural similarity) computations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lifting%20wavelet%20transform%20%28LWT%29" title="lifting wavelet transform (LWT)">lifting wavelet transform (LWT)</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-space%20vectorial%20decomposition" title=" sub-space vectorial decomposition"> sub-space vectorial decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=secure" title=" secure"> secure</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20watermarking" title=" image watermarking"> image watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=watermark" title=" watermark"> watermark</a> </p> <a href="https://publications.waset.org/abstracts/70998/lifting-wavelet-transform-and-singular-values-decomposition-for-secure-image-watermarking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70998.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">276</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">30</span> Secure Image Encryption via Enhanced Fractional Order Chaotic Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ismail%20Haddad">Ismail Haddad</a>, <a href="https://publications.waset.org/abstracts/search?q=Djamel%20Herbadji"> Djamel Herbadji</a>, <a href="https://publications.waset.org/abstracts/search?q=Aissa%20Belmeguenai"> Aissa Belmeguenai</a>, <a href="https://publications.waset.org/abstracts/search?q=Selma%20Boumerdassi"> Selma Boumerdassi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> in this paper, we provide a novel approach for image encryption that employs the Fibonacci matrix and an enhanced fractional order chaotic map. The enhanced map overcomes the drawbacks of the classical map, especially the limited chaotic range and non-uniform distribution of chaotic sequences, resulting in a larger encryption key space. As a result, this strategy improves the encryption system's security. Our experimental results demonstrate that our proposed algorithm effectively encrypts grayscale images with exceptional efficiency. Furthermore, our technique is resistant to a wide range of potential attacks, including statistical and entropy attacks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title="image encryption">image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=logistic%20map" title=" logistic map"> logistic map</a>, <a href="https://publications.waset.org/abstracts/search?q=fibonacci%20matrix" title=" fibonacci matrix"> fibonacci matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=grayscale%20images" title=" grayscale images"> grayscale images</a> </p> <a href="https://publications.waset.org/abstracts/167150/secure-image-encryption-via-enhanced-fractional-order-chaotic-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167150.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">29</span> A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Donatella%20Giuliani">Donatella Giuliani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm has been applied in a histogram-based research of cluster means. The Firefly Algorithm is a stochastic global optimization technique, centered on the flashing characteristics of fireflies. In this context it has been performed to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Successively these means are used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated applying the iterative Expectation-Maximization technique. The coefficients of the linear super-position of Gaussians can be thought as prior probabilities of each component. Applying the Bayes rule, the posterior probabilities of the grayscale intensities have been evaluated, therefore their maxima are used to assign each pixel to the clusters, according to their gray-level values. The proposed approach appears fairly solid and reliable when applied even to complex grayscale images. The validation has been performed by using different standard measures, more precisely: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK) and the Davies-Bouldin (DB) index. The achieved results have strongly confirmed the robustness of this gray scale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is due to the use of maxima of responsibilities for the pixel assignment that implies a consistent reduction of the computational costs. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering%20images" title="clustering images">clustering images</a>, <a href="https://publications.waset.org/abstracts/search?q=firefly%20algorithm" title=" firefly algorithm"> firefly algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=meta%20heuristic%20algorithm" title=" meta heuristic algorithm"> meta heuristic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a> </p> <a href="https://publications.waset.org/abstracts/79553/a-segmentation-method-for-grayscale-images-based-on-the-firefly-algorithm-and-the-gaussian-mixture-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79553.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">217</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28</span> Speaker Recognition Using LIRA Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nestor%20A.%20Garcia%20Fragoso">Nestor A. Garcia Fragoso</a>, <a href="https://publications.waset.org/abstracts/search?q=Tetyana%20Baydyk"> Tetyana Baydyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernst%20Kussul"> Ernst Kussul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article contains information from our investigation in the field of voice recognition. For this purpose, we created a voice database that contains different phrases in two languages, English and Spanish, for men and women. As a classifier, the LIRA (Limited Receptive Area) grayscale neural classifier was selected. The LIRA grayscale neural classifier was developed for image recognition tasks and demonstrated good results. Therefore, we decided to develop a recognition system using this classifier for voice recognition. From a specific set of speakers, we can recognize the speaker’s voice. For this purpose, the system uses spectrograms of the voice signals as input to the system, extracts the characteristics and identifies the speaker. The results are described and analyzed in this article. The classifier can be used for speaker identification in security system or smart buildings for different types of intelligent devices. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=extreme%20learning" title="extreme learning">extreme learning</a>, <a href="https://publications.waset.org/abstracts/search?q=LIRA%20neural%20classifier" title=" LIRA neural classifier"> LIRA neural classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title=" speaker identification"> speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20recognition" title=" voice recognition"> voice recognition</a> </p> <a href="https://publications.waset.org/abstracts/112384/speaker-recognition-using-lira-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">27</span> 3D Images Representation to Provide Information on the Type of Castella Beams Hole</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cut%20Maisyarah%20Karyati">Cut Maisyarah Karyati</a>, <a href="https://publications.waset.org/abstracts/search?q=Aries%20Muslim"> Aries Muslim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sulardi"> Sulardi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digital image processing techniques to obtain detailed information from an image have been used in various fields, including in civil engineering, where the use of solid beam profiles in buildings and bridges has often been encountered since the early development of beams. Along with this development, the founded castellated beam profiles began to be more diverse in shape, such as the shape of a hexagon, triangle, pentagon, circle, ellipse and oval that could be a practical solution in optimizing a construction because of its characteristics. The purpose of this research is to create a computer application to edge detect the profile of various shapes of the castella beams hole. The digital image segmentation method has been used to obtain the grayscale images and represented in 2D and 3D formats. This application has been successfully made according to the desired function, which is to provide information on the type of castella beam hole. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20image" title="digital image">digital image</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=grayscale" title=" grayscale"> grayscale</a>, <a href="https://publications.waset.org/abstracts/search?q=castella%20beams" title=" castella beams"> castella beams</a> </p> <a href="https://publications.waset.org/abstracts/143838/3d-images-representation-to-provide-information-on-the-type-of-castella-beams-hole" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143838.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">26</span> Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=L.%20Hamsaveni"> L. Hamsaveni</a>, <a href="https://publications.waset.org/abstracts/search?q=Navya%20Prakash"> Navya Prakash</a>, <a href="https://publications.waset.org/abstracts/search?q=Suresha"> Suresha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Document Image Analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis has been adopted in this paper. The technique involves document imaging methods such as Image Fusing and Speeded Up Robust Features (SURF) Detection to identify and extract the degraded regions from a set of document images to obtain an original document with complete information. In case, degraded document image captured is skewed, it has to be straightened (deskew) to perform further process. A special format of image storing known as YCbCr is used as a tool to convert the Grayscale image to RGB image format. The presented algorithm is tested on various types of degraded documents such as printed documents, handwritten documents, old script documents and handwritten image sketches in documents. The purpose of this research is to obtain an original document for a given set of degraded documents of the same source. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=grayscale%20image%20format" title="grayscale image format">grayscale image format</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20fusing" title=" image fusing"> image fusing</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20image%20format" title=" RGB image format"> RGB image format</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF%20detection" title=" SURF detection"> SURF detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YCbCr%20image%20format" title=" YCbCr image format"> YCbCr image format</a> </p> <a href="https://publications.waset.org/abstracts/64187/degraded-document-analysis-and-extraction-of-original-text-document-an-approach-without-optical-character-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64187.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">377</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> Texture Analysis of Grayscale Co-Occurrence Matrix on Mammographic Indexed Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Sushma">S. Sushma</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Balasubramanian"> S. Balasubramanian</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20C.%20Latha"> K. C. Latha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The mammographic image of breast cancer compressed and synthesized to get co-efficient values which will be converted (5x5) matrix to get ROI image where we get the highest value of effected region and with the same ideology the technique has been extended to differentiate between Calcification and normal cell image using mean value derived from 5x5 matrix values <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=texture%20analysis" title="texture analysis">texture analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=mammographic%20image" title=" mammographic image"> mammographic image</a>, <a href="https://publications.waset.org/abstracts/search?q=partitioned%20gray%20scale%20co-oocurance%20matrix" title=" partitioned gray scale co-oocurance matrix"> partitioned gray scale co-oocurance matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=co-efficient" title=" co-efficient "> co-efficient </a> </p> <a href="https://publications.waset.org/abstracts/17516/texture-analysis-of-grayscale-co-occurrence-matrix-on-mammographic-indexed-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">533</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">24</span> Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of over Segmentation</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seynabou%20Toure">Seynabou Toure</a>, <a href="https://publications.waset.org/abstracts/search?q=Oumar%20Diop"> Oumar Diop</a>, <a href="https://publications.waset.org/abstracts/search?q=Kidiyo%20Kpalma"> Kidiyo Kpalma</a>, <a href="https://publications.waset.org/abstracts/search?q=Amadou%20S.%20Maiga"> Amadou S. Maiga</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color and texture are the two most determinant elements for perception and recognition of the objects in an image. For this reason, color and texture analysis find a large field of application, for example in image classification and segmentation. But, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains for image classification: face recognition, industrial inspections, food science medical imaging among others. Taking into account color in the definition of these descriptors makes it possible to better characterize images. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors measure separately different parts of the electromagnetic spectrum; the visible ones and even those that are invisible to the human eye. The amounts of light reflected by the earth in spectral bands are then transformed into grayscale images. The primary natural colors Red (R) Green (G) and Blue (B) are then used in mixtures of different spectral bands in order to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works investigate the effect of using different color space for color texture classification. However, the selection of the best performing color space in land-sea segmentation is an open question. Its resolution may bring considerable improvements in certain applications like coastline detection, where the detection result is strongly dependent on the performance of the land-sea segmentation. The aim of this paper is to present the results of a study conducted on different color spaces in order to show the best-performing color space for land-sea segmentation. In this sense, an experimental analysis is carried out using five different color spaces (RGB, XYZ, Lab, HSV, YCbCr). For each color space, the Haar wavelet decomposition is used to extract different color texture features. These color texture features are then used for Fusion of Over Segmentation (FOOS) based classification; this allows segmentation of the land part from the sea one. By analyzing the different results of this study, the HSV color space is found as the best classification performance while using color and texture features; which is perfectly coherent with the results presented in the literature. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=coastline" title=" coastline"> coastline</a>, <a href="https://publications.waset.org/abstracts/search?q=color" title=" color"> color</a>, <a href="https://publications.waset.org/abstracts/search?q=sea-land%20segmentation" title=" sea-land segmentation"> sea-land segmentation</a> </p> <a href="https://publications.waset.org/abstracts/84598/best-performing-color-space-for-land-sea-segmentation-using-wavelet-transform-color-texture-features-and-fusion-of-over-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84598.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">247</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marcela%20De%20Oliveira">Marcela De Oliveira</a>, <a href="https://publications.waset.org/abstracts/search?q=Marina%20P.%20Da%20Silva"> Marina P. Da Silva</a>, <a href="https://publications.waset.org/abstracts/search?q=Fernando%20C.%20G.%20Da%20Rocha"> Fernando C. G. Da Rocha</a>, <a href="https://publications.waset.org/abstracts/search?q=Jorge%20M.%20Santos"> Jorge M. Santos</a>, <a href="https://publications.waset.org/abstracts/search?q=Jaime%20S.%20Cardoso"> Jaime S. Cardoso</a>, <a href="https://publications.waset.org/abstracts/search?q=Paulo%20N.%20Lisboa-Filho"> Paulo N. Lisboa-Filho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multiple Sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness in the information details provided, is the gold standard exam for diagnosis and follow-up of neurodegenerative diseases, such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, nearly 0.5-1.35% per year, far off the limits of normal aging. Thus, the brain volume quantification becomes an essential task for future analysis of the occurrence atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to errors and is time consuming due to various intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation have been extensively used to assist doctors in quantitative analyzes for disease diagnosis and monitoring. Thus, the purpose of this work was to evaluate the brain volume in MRI of MS patients. We used MRI scans with 30 slices of the five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational methods for the analysis of images were carried out in two steps: segmentation of the brain and brain volume quantification. The first image processing step was to perform brain extraction by skull stripping from the original image. 
In the skull stripper for MRI images of the brain, the algorithm registers a grayscale atlas image to the grayscale patient image. The associated brain mask is propagated using the registration transformation. Then this mask is eroded and used for a refined brain extraction based on level-sets (edge of the brain-skull border with dedicated expansion, curvature, and advection terms). In the second step, the brain volume quantification was performed by counting the voxels belonging to the segmentation mask and converted in cc. We observed an average brain volume of 1469.5 cc. We concluded that the automatic method applied in this work can be used for the brain extraction process and brain volume quantification in MRI. The development and use of computer programs can contribute to assist health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future works, we expect to implement more automated methods for the assessment of cerebral atrophy and brain lesions quantification, including machine-learning approaches. Acknowledgements: This work was supported by a grant from Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brain%20volume" title="brain volume">brain volume</a>, <a href="https://publications.waset.org/abstracts/search?q=magnetic%20resonance%20imaging" title=" magnetic resonance imaging"> magnetic resonance imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple%20sclerosis" title=" multiple sclerosis"> multiple sclerosis</a>, <a href="https://publications.waset.org/abstracts/search?q=skull%20stripper" title=" skull stripper"> skull stripper</a> </p> <a href="https://publications.waset.org/abstracts/127935/skull-extraction-for-quantification-of-brain-volume-in-magnetic-resonance-imaging-of-multiple-sclerosis-patients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127935.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">146</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> Control of Belts for Classification of Geometric Figures by Artificial Vision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Juan%20Sebastian%20Huertas%20Piedrahita">Juan Sebastian Huertas Piedrahita</a>, <a href="https://publications.waset.org/abstracts/search?q=Jaime%20Arturo%20Lopez%20Duque"> Jaime Arturo Lopez Duque</a>, <a href="https://publications.waset.org/abstracts/search?q=Eduardo%20Luis%20Perez%20Londo%C3%B1o"> Eduardo Luis Perez Londoño</a>, <a href="https://publications.waset.org/abstracts/search?q=Juli%C3%A1n%20S.%20Rodr%C3%ADguez"> Julián S. Rodríguez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The process of generating computer vision is called artificial vision. The artificial vision is a branch of artificial intelligence that allows the obtaining, processing, and analysis of any type of information especially the ones obtained through digital images. 
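
The volume-quantification step reduces to counting mask voxels and converting to cubic centimetres; a minimal sketch, assuming the skull-stripping stage has already produced a boolean mask and that the voxel spacing is known:

```python
# Sketch of brain volume quantification from a segmentation mask.
import numpy as np

def brain_volume_cc(mask, spacing_mm=(1.0, 1.0, 1.0)):
    """mask: boolean 3-D array; spacing_mm: voxel size in millimetres."""
    voxel_mm3 = float(np.prod(spacing_mm))          # volume of one voxel
    return mask.sum() * voxel_mm3 / 1000.0          # 1 cc = 1000 mm^3
```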

22. Control of Belts for Classification of Geometric Figures by Artificial Vision
Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez
Abstract: The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the acquisition, processing, and analysis of any type of information, especially information obtained through digital images. Artificial vision is currently used in manufacturing for quality control and production, as these processes can be realized through algorithms for counting, positioning, and recognizing objects measured by a single camera (or more). Companies, in turn, use assembly lines formed by conveyor systems with actuators on them to move pieces from one location to another during production; for good performance, these devices must be programmed beforehand with a logic routine. Nowadays production is the main target of every industry, together with quality and the fast elaboration of the different stages and processes in the chain of production of any product or service offered. The principal aim of this project is to program a computer to recognize geometric figures (circle, square, and triangle), each with a different color, through a camera, and to link it with a group of conveyor systems that organize the figures into cubicles, which are also distinguished from one another by color. Since the project is based on artificial vision, the methodology must be strict; it is detailed below. 1. Methodology: 1.1. The software used in this project is Qt Creator linked with the OpenCV libraries; together, these tools are used to write the program that identifies colors and shapes directly from the camera. 1.2. Image acquisition: to use the OpenCV libraries it is first necessary to acquire images, which can be captured by a computer's web camera or by a specialized camera. 1.3. RGB color recognition is realized in code by traversing the matrices of the captured images and comparing pixels to identify the primary colors red, green, and blue. 1.4. Detecting shapes requires segmenting the images: the image is first converted from RGB to grayscale to work with its dark tones, then binarized so that the figure appears in white on a black background, and finally the contours of the figure are found and the number of edges counted to identify which figure it is (a code sketch of this step follows this entry). 1.5. Once the color and figure have been identified, the program drives the conveyor systems, whose actuators sort the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects that require an interface between a computer and the environment, since the camera captures external characteristics for further processing. With this program any type of assembly line can be optimized, because images of the environment can be obtained and the process made more accurate.
Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB
Procedia: https://publications.waset.org/abstracts/32096/control-of-belts-for-classification-of-geometric-figures-by-artificial-vision | PDF: https://publications.waset.org/abstracts/32096.pdf | Downloads: 378
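
A minimal sketch of the figure-recognition step (1.4) with OpenCV, matching the grayscale-binarize-contours pipeline in the abstract; the Otsu threshold choice and the 0.04 polygon-approximation factor are assumptions of this sketch.

```python
# Sketch: classify a figure by counting the edges of its approximated contour.
import cv2

def classify_figure(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Binarize: figure in white on a black background.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "none"
    largest = max(contours, key=cv2.contourArea)
    # Approximate the contour by a polygon and count its vertices.
    approx = cv2.approxPolyDP(largest, 0.04 * cv2.arcLength(largest, True), True)
    return {3: "triangle", 4: "square"}.get(len(approx), "circle")
```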
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20vision" title=" artificial vision"> artificial vision</a>, <a href="https://publications.waset.org/abstracts/search?q=binarized" title=" binarized"> binarized</a>, <a href="https://publications.waset.org/abstracts/search?q=grayscale" title=" grayscale"> grayscale</a>, <a href="https://publications.waset.org/abstracts/search?q=images" title=" images"> images</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB" title=" RGB "> RGB </a> </p> <a href="https://publications.waset.org/abstracts/32096/control-of-belts-for-classification-of-geometric-figures-by-artificial-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32096.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">378</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> Intelligent Grading System of Apple Using Neural Network Arbitration</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ebenezer%20Obaloluwa%20Olaniyi">Ebenezer Obaloluwa Olaniyi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an intelligent system has been designed to grade apple based on either its defective or healthy for production in food processing. This paper is segmented into two different phase. In the first phase, the image processing techniques were employed to extract the necessary features required in the apple. These techniques include grayscale conversion, segmentation where a threshold value is chosen to separate the foreground of the images from the background. Then edge detection was also employed to bring out the features in the images. These extracted features were then fed into the neural network in the second phase of the paper. The second phase is a classification phase where neural network employed to classify the defective apple from the healthy apple. In this phase, the network was trained with back propagation and tested with feed forward network. The recognition rate obtained from our system shows that our system is more accurate and faster as compared with previous work. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=apple" title=" apple"> apple</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20system" title=" intelligent system"> intelligent system</a> </p> <a href="https://publications.waset.org/abstracts/43875/intelligent-grading-system-of-apple-using-neural-network-arbitration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43875.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">398</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20</span> Application of Deep Learning in Colorization of LiDAR-Derived Intensity Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edgardo%20V.%20Gubatanga%20Jr.">Edgardo V. Gubatanga Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Mark%20Joshua%20Salvacion"> Mark Joshua Salvacion</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most aerial LiDAR systems have accompanying aerial cameras in order to capture not only the terrain of the surveyed area but also its true-color appearance. However, the presence of atmospheric clouds, poor lighting conditions, and aerial camera problems during an aerial survey may cause absence of aerial photographs. These leave areas having terrain information but lacking aerial photographs. Intensity images can be derived from LiDAR data but they are only grayscale images. A deep learning model is developed to create a complex function in a form of a deep neural network relating the pixel values of LiDAR-derived intensity images and true-color images. This complex function can then be used to predict the true-color images of a certain area using intensity images from LiDAR data. The predicted true-color images do not necessarily need to be accurate compared to the real world. They are only intended to look realistic so that they can be used as base maps. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aerial%20LiDAR" title="aerial LiDAR">aerial LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=colorization" title=" colorization"> colorization</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=intensity%20images" title=" intensity images"> intensity images</a> </p> <a href="https://publications.waset.org/abstracts/94116/application-of-deep-learning-in-colorization-of-lidar-derived-intensity-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94116.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> Enhancing the Bionic Eye: A Real-time Image Optimization Framework to Encode Color and Spatial Information Into Retinal Prostheses</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=William%20Huang">William Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal prostheses are currently limited to low resolution grayscale images that lack color and spatial information. This study develops a novel real-time image optimization framework and tools to encode maximum information to the prostheses which are constrained by the number of electrodes. One key idea is to localize main objects in images while reducing unnecessary background noise through region-contrast saliency maps. A novel color depth mapping technique was developed through MiniBatchKmeans clustering and color space selection. The resulting image was downsampled using bicubic interpolation to reduce image size while preserving color quality. In comparison to current schemes, the proposed framework demonstrated better visual quality in tested images. The use of the region-contrast saliency map showed improvements in efficacy up to 30%. Finally, the computational speed of this algorithm is less than 380 ms on tested cases, making real-time retinal prostheses feasible. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20implants" title="retinal implants">retinal implants</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20processing%20unit" title=" virtual processing unit"> virtual processing unit</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=saliency%20maps" title=" saliency maps"> saliency maps</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20quantization" title=" color quantization"> color quantization</a> </p> <a href="https://publications.waset.org/abstracts/147972/enhancing-the-bionic-eye-a-real-time-image-optimization-framework-to-encode-color-and-spatial-information-into-retinal-prostheses" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147972.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> Implementation of Edge Detection Based on Autofluorescence Endoscopic Image of Field Programmable Gate Array</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hao%20Cheng">Hao Cheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhiwu%20Wang"> Zhiwu Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Guozheng%20Yan"> Guozheng Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=Pingping%20Jiang"> Pingping Jiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Shijia%20Qin"> Shijia Qin</a>, <a href="https://publications.waset.org/abstracts/search?q=Shuai%20Kuang"> Shuai Kuang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autofluorescence Imaging (AFI) is a technology for detecting early carcinogenesis of the gastrointestinal tract in recent years. Compared with traditional white light endoscopy (WLE), this technology greatly improves the detection accuracy of early carcinogenesis, because the colors of normal tissues are different from cancerous tissues. Thus, edge detection can distinguish them in grayscale images. In this paper, based on the traditional Sobel edge detection method, optimization has been performed on this method which considers the environment of the gastrointestinal, including adaptive threshold and morphological processing. All of the processes are implemented on our self-designed system based on the image sensor OV6930 and Field Programmable Gate Array (FPGA), The system can capture the gastrointestinal image taken by the lens in real time and detect edges. The final experiments verified the feasibility of our system and the effectiveness and accuracy of the edge detection algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AFI" title="AFI">AFI</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20threshold" title=" adaptive threshold"> adaptive threshold</a>, <a href="https://publications.waset.org/abstracts/search?q=morphological%20processing" title=" morphological processing"> morphological processing</a>, <a href="https://publications.waset.org/abstracts/search?q=OV6930" title=" OV6930"> OV6930</a>, <a href="https://publications.waset.org/abstracts/search?q=FPGA" title=" FPGA"> FPGA</a> </p> <a href="https://publications.waset.org/abstracts/102685/implementation-of-edge-detection-based-on-autofluorescence-endoscopic-image-of-field-programmable-gate-array" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102685.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">230</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> Principle Component Analysis on Colon Cancer Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20K.%20Caecar%20Pratiwi">N. K. Caecar Pratiwi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yunendah%20Nur%20Fuadah"> Yunendah Nur Fuadah</a>, <a href="https://publications.waset.org/abstracts/search?q=Rita%20Magdalena"> Rita Magdalena</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20D.%20Atmaja"> R. D. Atmaja</a>, <a href="https://publications.waset.org/abstracts/search?q=Sofia%20Saidah"> Sofia Saidah</a>, <a href="https://publications.waset.org/abstracts/search?q=Ocky%20Tiaramukti"> Ocky Tiaramukti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Colon cancer or colorectal cancer is a type of cancer that attacks the last part of the human digestive system. Lymphoma and carcinoma are types of cancer that attack human’s colon. Colon cancer causes deaths about half a million people every year. In Indonesia, colon cancer is the third largest cancer case for women and second in men. Unhealthy lifestyles such as minimum consumption of fiber, rarely exercising and lack of awareness for early detection are factors that cause high cases of colon cancer. The aim of this project is to produce a system that can detect and classify images into type of colon cancer lymphoma, carcinoma, or normal. The designed system used 198 data colon cancer tissue pathology, consist of 66 images for Lymphoma cancer, 66 images for carcinoma cancer and 66 for normal / healthy colon condition. This system will classify colon cancer starting from image preprocessing, feature extraction using Principal Component Analysis (PCA) and classification using K-Nearest Neighbor (K-NN) method. Several stages in preprocessing are resize, convert RGB image to grayscale, edge detection and last, histogram equalization. Tests will be done by trying some K-NN input parameter setting. The result of this project is an image processing system that can detect and classify the type of colon cancer with high accuracy and low computation time. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=carcinoma" title="carcinoma">carcinoma</a>, <a href="https://publications.waset.org/abstracts/search?q=colorectal%20cancer" title=" colorectal cancer"> colorectal cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=k-nearest%20neighbor" title=" k-nearest neighbor"> k-nearest neighbor</a>, <a href="https://publications.waset.org/abstracts/search?q=lymphoma" title=" lymphoma"> lymphoma</a>, <a href="https://publications.waset.org/abstracts/search?q=principle%20component%20analysis" title=" principle component analysis"> principle component analysis</a> </p> <a href="https://publications.waset.org/abstracts/105607/principle-component-analysis-on-colon-cancer-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/105607.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">205</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">16</span> Possibility of Creating Polygon Layers from Raster Layers Obtained by using Classic Image Processing Software: Case of Geological Map of Rwanda</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Louis%20Nahimana">Louis Nahimana</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most maps are in a raster or pdf format and it is not easy to get vector layers of published maps. Faced to the production of geological simplified map of the northern Lake Tanganyika countries without geological information in vector format, I tried a method of obtaining vector layers from raster layers created from geological maps of Rwanda and DR Congo in pdf and jpg format. The procedure was as follows: The original raster maps were georeferenced using ArcGIS10.2. Under Adobe Photoshop, map areas with the same color corresponding to a lithostratigraphic unit were selected all over the map and saved in a specific raster layer. Using the same image processing software Adobe Photoshop, each RGB raster layer was converted in grayscale type and improved before importation in ArcGIS10. After georeferencing, each lithostratigraphic raster layer was transformed into a multitude of polygons with the tool "Raster to Polygon (Conversion)". Thereafter, tool "Aggregate Polygons (Cartography)" allowed obtaining a single polygon layer. Repeating the same steps for each color corresponding to a homogeneous rock unit, it was possible to reconstruct the simplified geological constitution of Rwanda and the Democratic Republic of Congo in vector format. By using the tool «Append (Management)», vector layers obtained were combined with those from Burundi to achieve vector layers of the geology of the « Northern Lake Tanganyika countries ». 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=creating%20raster%20layer%20under%20image%20processing%20software" title="creating raster layer under image processing software">creating raster layer under image processing software</a>, <a href="https://publications.waset.org/abstracts/search?q=raster%20to%20polygon" title=" raster to polygon"> raster to polygon</a>, <a href="https://publications.waset.org/abstracts/search?q=aggregate%20polygons" title=" aggregate polygons"> aggregate polygons</a>, <a href="https://publications.waset.org/abstracts/search?q=adobe%20photoshop" title=" adobe photoshop"> adobe photoshop</a> </p> <a href="https://publications.waset.org/abstracts/31397/possibility-of-creating-polygon-layers-from-raster-layers-obtained-by-using-classic-image-processing-software-case-of-geological-map-of-rwanda" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31397.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">442</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15</span> Preparing a Library of Abnormal Masses for Designing a Long-Lasting Anatomical Breast Phantom for Ultrasonography Training</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nasibullina%20A.">Nasibullina A.</a>, <a href="https://publications.waset.org/abstracts/search?q=Leonov%20D."> Leonov D.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The ultrasonography method is actively used for the early diagnosis of various le-sions in the human body, including the mammary gland. The incidence of breast cancer has increased by more than 20%, and mortality by 14% since 2008. The correctness of the diagnosis often directly depends on the qualifications and expe-rience of a diagnostic medical sonographer. That is why special attention should be paid to the practical training of future specialists. Anatomical phantoms are ex-cellent teaching tools because they accurately imitate the characteristics of real hu-man tissues and organs. The purpose of this work is to create a breast phantom for practicing ultrasound diagnostic skills in grayscale and elastography imaging, as well as ultrasound-guided biopsy sampling. We used silicone-like compounds ranging from 3 to 17 on the Shore scale hardness units to simulate soft tissue and lesions. Impurities with experimentally selected concentrations were added to give the phantom the necessary attenuation and reflection parameters. We used 3D modeling programs and 3D printing with PLA plastic to create the casting mold. We developed a breast phantom with inclusions of varying shape, elasticity and echogenicity. After testing the created phantom in B-mode and elastography mode, we performed a survey asking 19 participants how realistic the sonograms of the phantom were. The results showed that the closest to real was the model of the cyst with 9.5 on the 0-10 similarity scale. Thus, the developed breast phantom can be used for ultrasonography, elastography, and ultrasound-guided biopsy training. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=breast%20ultrasound" title="breast ultrasound">breast ultrasound</a>, <a href="https://publications.waset.org/abstracts/search?q=mammary%20gland" title=" mammary gland"> mammary gland</a>, <a href="https://publications.waset.org/abstracts/search?q=mammography" title=" mammography"> mammography</a>, <a href="https://publications.waset.org/abstracts/search?q=training%20phantom" title=" training phantom"> training phantom</a>, <a href="https://publications.waset.org/abstracts/search?q=tissue-mimicking%20materials" title=" tissue-mimicking materials"> tissue-mimicking materials</a> </p> <a href="https://publications.waset.org/abstracts/174839/preparing-a-library-of-abnormal-masses-for-designing-a-long-lasting-anatomical-breast-phantom-for-ultrasonography-training" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174839.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14</span> Iris Cancer Detection System Using Image Processing and Neural Classifier</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdulkader%20Helwan">Abdulkader Helwan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Iris cancer, so called intraocular melanoma is a cancer that starts in the iris; the colored part of the eye that surrounds the pupil. There is a need for an accurate and cost-effective iris cancer detection system since the available techniques used currently are still not efficient. The combination of the image processing and artificial neural networks has a great efficiency for the diagnosis and detection of the iris cancer. Image processing techniques improve the diagnosis of the cancer by enhancing the quality of the images, so the physicians diagnose properly. However, neural networks can help in making decision; whether the eye is cancerous or not. This paper aims to develop an intelligent system that stimulates a human visual detection of the intraocular melanoma, so called iris cancer. The suggested system combines both image processing techniques and neural networks. The images are first converted to grayscale, filtered, and then segmented using prewitt edge detection algorithm to detect the iris, sclera circles and the cancer. The principal component analysis is used to reduce the image size and for extracting features. Those features are considered then as inputs for a neural network which is capable of deciding if the eye is cancerous or not, throughout its experience adopted by many training iterations of different normal and abnormal eye images during the training phase. Normal images are obtained from a public database available on the internet, “Mile Research”, while the abnormal ones are obtained from another database which is the “eyecancer”. The experimental results for the proposed system show high accuracy 100% for detecting cancer and making the right decision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris%20cancer" title="iris cancer">iris cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=intraocular%20melanoma" title=" intraocular melanoma"> intraocular melanoma</a>, <a href="https://publications.waset.org/abstracts/search?q=cancerous" title=" cancerous"> cancerous</a>, <a href="https://publications.waset.org/abstracts/search?q=prewitt%20edge%20detection%20algorithm" title=" prewitt edge detection algorithm"> prewitt edge detection algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=sclera" title=" sclera"> sclera</a> </p> <a href="https://publications.waset.org/abstracts/16796/iris-cancer-detection-system-using-image-processing-and-neural-classifier" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16796.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">503</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13</span> Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A%3A%20Annis%20Fathima">A: Annis Fathima</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Vaidehi"> V. Vaidehi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ajitha"> S. Ajitha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face recognition system finds many applications in surveillance and human computer interaction systems. As the applications using face recognition systems are of much importance and demand more accuracy, more robustness in the face recognition system is expected with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for Gabor filters. This image is convolved with bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis techniques are used to reduce the intra-class space and maximize the inter-class space. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), Weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2 LDA). LDA reduces the feature dimension by extracting the features with greater variance. k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its feature with each of the training set features. The HGWLDA approach is robust against illumination conditions as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using less number of features for varying expressions. The performance of the proposed HGWLDA approaches is evaluated using AT&T database, MIT-India face database and faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20wavelet" title=" Gabor wavelet"> Gabor wavelet</a>, <a href="https://publications.waset.org/abstracts/search?q=LDA" title=" LDA"> LDA</a>, <a href="https://publications.waset.org/abstracts/search?q=k-NN%20classifier" title=" k-NN classifier"> k-NN classifier</a> </p> <a href="https://publications.waset.org/abstracts/11196/hybrid-approach-for-face-recognition-combining-gabor-wavelet-and-linear-discriminant-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11196.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12</span> Barnard Feature Point Detector for Low-Contractperiapical Radiography Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chih-Yi%20Ho">Chih-Yi Ho</a>, <a href="https://publications.waset.org/abstracts/search?q=Tzu-Fang%20Chang"> Tzu-Fang Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chih-Chia%20Huang"> Chih-Chia Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chia-Yen%20Lee"> Chia-Yen Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In dental clinics, the dentists use the periapical radiography image to assess the effectiveness of endodontic treatment of teeth with chronic apical periodontitis. Periapical radiography images are taken at different times to assess alveolar bone variation before and after the root canal treatment, and furthermore to judge whether the treatment was successful. Current clinical assessment of apical tissue recovery relies only on dentist personal experience. It is difficult to have the same standard and objective interpretations due to the dentist or radiologist personal background and knowledge. If periapical radiography images at the different time could be registered well, the endodontic treatment could be evaluated. In the image registration area, it is necessary to assign representative control points to the transformation model for good performances of registration results. However, detection of representative control points (feature points) on periapical radiography images is generally very difficult. Regardless of which traditional detection methods are practiced, sufficient feature points may not be detected due to the low-contrast characteristics of the x-ray image. Barnard detector is an algorithm for feature point detection based on grayscale value gradients, which can obtain sufficient feature points in the case of gray-scale contrast is not obvious. However, the Barnard detector would detect too many feature points, and they would be too clustered. This study uses the local extrema of clustering feature points and the suppression radius to overcome the problem, and compared different feature point detection methods. In the preliminary result, the feature points could be detected as representative control points by the proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20detection" title="feature detection">feature detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Barnard%20detector" title=" Barnard detector"> Barnard detector</a>, <a href="https://publications.waset.org/abstracts/search?q=registration" title=" registration"> registration</a>, <a href="https://publications.waset.org/abstracts/search?q=periapical%20radiography%20image" title=" periapical radiography image"> periapical radiography image</a>, <a href="https://publications.waset.org/abstracts/search?q=endodontic%20treatment" title=" endodontic treatment"> endodontic treatment</a> </p> <a href="https://publications.waset.org/abstracts/67658/barnard-feature-point-detector-for-low-contractperiapical-radiography-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67658.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">442</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11</span> Iris Recognition Based on the Low Order Norms of Gradient Components</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Iman%20A.%20Saad">Iman A. Saad</a>, <a href="https://publications.waset.org/abstracts/search?q=Loay%20E.%20George"> Loay E. George</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Iris pattern is an important biological feature of human body; it becomes very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition and a simple, efficient and fast method is introduced to extract a set of discriminatory features using first order gradient operator applied on grayscale images. The gradient based features are robust, up to certain extents, against the variations may occur in contrast or brightness of iris image samples; the variations are mostly occur due lightening differences and camera changes. At first, the iris region is located, after that it is remapped to a rectangular area of size 360x60 pixels. Also, a new method is proposed for detecting eyelash and eyelid points; it depends on making image statistical analysis, to mark the eyelash and eyelid as a noise points. In order to cover the features localization (variation), the rectangular iris image is partitioned into N overlapped sub-images (blocks); then from each block a set of different average directional gradient densities values is calculated to be used as texture features vector. The applied gradient operators are taken along the horizontal, vertical and diagonal directions. The low order norms of gradient components were used to establish the feature vector. Euclidean distance based classifier was used as a matching metric for determining the degree of similarity between the features vector extracted from the tested iris image and template features vectors stored in the database. Experimental tests were performed using 2639 iris images from CASIA V4-Interival database, the attained recognition accuracy has reached up to 99.92%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition" title="iris recognition">iris recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=contrast%20stretching" title=" contrast stretching"> contrast stretching</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20features" title=" gradient features"> gradient features</a>, <a href="https://publications.waset.org/abstracts/search?q=texture%20features" title=" texture features"> texture features</a>, <a href="https://publications.waset.org/abstracts/search?q=Euclidean%20metric" title=" Euclidean metric"> Euclidean metric</a> </p> <a href="https://publications.waset.org/abstracts/13277/iris-recognition-based-on-the-low-order-norms-of-gradient-components" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13277.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">335</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10</span> Similar Script Character Recognition on Kannada and Telugu</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gurukiran%20Veerapur">Gurukiran Veerapur</a>, <a href="https://publications.waset.org/abstracts/search?q=Nytik%20Birudavolu"> Nytik Birudavolu</a>, <a href="https://publications.waset.org/abstracts/search?q=Seetharam%20U.%20N."> Seetharam U. N.</a>, <a href="https://publications.waset.org/abstracts/search?q=Chandravva%20Hebbi"> Chandravva Hebbi</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Praneeth%20Reddy"> R. Praneeth Reddy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities in characters. To recognize the characters exhaustive datasets are required, but there are only a few publicly available datasets. As a result, we decided to create a dataset for one language (source language),train the model with it, and then test it with the target language.Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on pictures with noise and different lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations. Manual labelling was employed to ensure the accuracy of the character labels. The deep learning models like CNN (Convolutional Neural Network) and Visual Attention neural network (VAN) are used to experiment with the dataset. A Visual Attention neural network (VAN) architecture was adopted, incorporating additional channels for Canny edge features as the results obtained were good with this approach. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge characteristics applied than with a model that solely used the original grayscale images. 
The accuracy of the model was found to be 80.11% for Telugu characters and 98.01% for Kannada words when tested with these languages. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=base%20characters" title="base characters">base characters</a>, <a href="https://publications.waset.org/abstracts/search?q=modifiers" title=" modifiers"> modifiers</a>, <a href="https://publications.waset.org/abstracts/search?q=guninthalu" title=" guninthalu"> guninthalu</a>, <a href="https://publications.waset.org/abstracts/search?q=aksharas" title=" aksharas"> aksharas</a>, <a href="https://publications.waset.org/abstracts/search?q=vattakshara" title=" vattakshara"> vattakshara</a>, <a href="https://publications.waset.org/abstracts/search?q=VAN" title=" VAN"> VAN</a> </p> <a href="https://publications.waset.org/abstracts/184438/similar-script-character-recognition-on-kannada-and-telugu" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/184438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">53</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> Immature Palm Tree Detection Using Morphological Filter for Palm Counting with High Resolution Satellite Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nur%20Nadhirah%20Rusyda%20Rosnan">Nur Nadhirah Rusyda Rosnan</a>, <a href="https://publications.waset.org/abstracts/search?q=Nursuhaili%20Najwa%20Masrol"> Nursuhaili Najwa Masrol</a>, <a href="https://publications.waset.org/abstracts/search?q=Nurul%20Fatiha%20MD%20Nor"> Nurul Fatiha MD Nor</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Zafrullah%20Mohammad%20Salim"> Mohammad Zafrullah Mohammad Salim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sim%20Choon%20Cheak"> Sim Choon Cheak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accurate inventories of oil palm planted areas are crucial for plantation management, as they impact the overall economy and production of oil. One of the technological advancements in the oil palm industry is semi-automated palm counting, which is replacing conventional manual palm counting based on digitized aerial imagery. Most of the semi-automated palm counting methods that have been developed were limited to mature palms, whose canopy size is well represented in satellite images. Immature palms were therefore often left out, since their canopies are barely visible in satellite images. In this paper, an approach using a morphological filter and high-resolution satellite images is proposed to detect immature palm trees, making it possible to count them. The method begins by applying an erosion filter with an appropriate window size of 3 m to the high-resolution satellite image. The eroded image was further segmented using watershed segmentation to delineate immature palm tree regions.
Then, local minimum detection was used, because it is hypothesized that immature oil palm trees are located at local minima within an oil palm field setting in a grayscale image. The detection points generated from the local minima are displaced to the center of the immature oil palm region and thinned, so that only one detection point is left to represent each tree. The performance of the proposed method was evaluated on three subsets with slopes ranging from 0 to 20° and different planting designs, i.e., straight and terrace. The proposed method was able to achieve more than 90% accuracy when compared with the ground truth, with an overall F-measure score of up to 0.91. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=immature%20palm%20count" title="immature palm count">immature palm count</a>, <a href="https://publications.waset.org/abstracts/search?q=oil%20palm" title=" oil palm"> oil palm</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a> </p> <a href="https://publications.waset.org/abstracts/175726/immature-palm-tree-detection-using-morphological-filter-for-palm-counting-with-high-resolution-satellite-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/175726.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> NanoFrazor Lithography for Advanced 2D and 3D Nanodevices</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhengming%20Wu">Zhengming Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> NanoFrazor lithography systems were developed as a first true alternative or extension to standard maskless nanolithography methods like electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL). Here, a heatable ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact of the probe on a thermally responsive resist generates the high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows for markerless overlay, which has been achieved with sub-5 nm accuracy, and it supports stitching layout sections together with an error below 10 nm. Pattern transfer from such resist features at below 10 nm resolution was demonstrated. The technology has proven its value as an enabler of new kinds of ultra-high resolution nanodevices as well as for improving the performance of existing device concepts. The application range for this new nanolithography technique is very broad, spanning from ultra-high resolution 2D and 3D patterning to chemical and physical modification of matter at the nanoscale.
Nanometer-precise markerless overlay and non-invasiveness to sensitive materials are among the key strengths of the technology. However, while patterning below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible using the heated tip alone. Towards this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features. Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no developing or any other processing steps in between. The presentation will include examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices, and (iv) generating spintronic circuits. Some of these applications have been enabled only by the various unique capabilities of NanoFrazor lithography, such as the absence of damage from a charged particle beam. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nanofabrication" title="nanofabrication">nanofabrication</a>, <a href="https://publications.waset.org/abstracts/search?q=grayscale%20lithography" title=" grayscale lithography"> grayscale lithography</a>, <a href="https://publications.waset.org/abstracts/search?q=2D%20materials%20device" title=" 2D materials device"> 2D materials device</a>, <a href="https://publications.waset.org/abstracts/search?q=nano-optics" title=" nano-optics"> nano-optics</a>, <a href="https://publications.waset.org/abstracts/search?q=photonics" title=" photonics"> photonics</a>, <a href="https://publications.waset.org/abstracts/search?q=spintronic%20circuits" title=" spintronic circuits"> spintronic circuits</a> </p> <a href="https://publications.waset.org/abstracts/160133/nanofrazor-lithography-for-advanced-2d-and-3d-nanodevices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160133.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> Automatic Segmentation of 3D Tomographic Image Contours for Radiotherapy Planning in a Low-Cost Solution </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20F.%20Carvalho">D. F. Carvalho</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20O.%20Uscamayta"> A. O. Uscamayta</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20C.%20Guerrero"> J. C. Guerrero</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20F.%20Oliveira"> H. F. Oliveira</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20M.%20Azevedo-Marques"> P. M. Azevedo-Marques</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The creation of vector contour slices (ROIs) on body silhouettes of oncologic patients is an important step during radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment.
The radiotherapy planning of patients is performed by complex software packages focused on the analysis of tumor regions, the protection of organs at risk (OARs), and the calculation of radiation doses for anomalies (tumors). These packages are supplied by a few manufacturers and run on sophisticated workstations with vector processing, at a cost of approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally do not have the financial means to acquire radiotherapy planning workstations, resulting in waiting queues for new patients' treatment. The SIPRAD project is composed of a set of integrated and interoperable software tools able to execute all stages of radiotherapy planning on simple personal computers (PCs) in place of the workstations. The goal of this work is to present an image processing technique, computationally feasible, that is able to perform automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique operates on grayscale tomography slices, extending their use to three dimensions with a greedy algorithm. SIPRAD-Body creates an irregular polyhedron with an adapted Canny edge algorithm, without the use of preprocessing filters such as contrast and brightness adjustment. In addition, comparing the SIPRAD-Body technique with existing solutions, a contour similarity of at least 78% is reached. Four criteria are used for this comparison: contour area, contour length, the difference between the centers of mass, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP). The exams came from patients with different ethnicities, ages, tumor severities, and body regions. Even in services that already have workstations, it is possible to have SIPRAD working alongside the PCs, because communication between both systems through the DICOM protocol provides an increase in workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible, because of its degree of similarity, in both new radiotherapy planning services and existing services.
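<p class="card-text">A condensed sketch of the two ingredients named above, per-slice contour extraction with a Canny edge step and the Jaccard similarity criterion, is shown below; the Canny thresholds are assumed values, and the fragment is an illustration rather than the SIPRAD-Body implementation.</p> <pre><code>
# Per-slice body contour via Canny plus the Jaccard criterion (sketch).
import cv2
import numpy as np

def body_contour(slice_gray):
    """Largest external contour of a grayscale tomography slice."""
    edges = cv2.Canny(slice_gray, 30, 90)  # no contrast/brightness prefilter
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

def jaccard(mask_a, mask_b):
    """Jaccard index between two binary masks, one of the four criteria."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union
</code></pre>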
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radiotherapy" title="radiotherapy">radiotherapy</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=DICOM%20RT" title=" DICOM RT"> DICOM RT</a>, <a href="https://publications.waset.org/abstracts/search?q=Treatment%20Planning%20System%20%28TPS%29" title=" Treatment Planning System (TPS)"> Treatment Planning System (TPS)</a> </p> <a href="https://publications.waset.org/abstracts/75533/automatic-segmentation-of-3d-tomographic-images-contours-at-radiotherapy-planning-in-low-cost-solution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75533.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">296</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=David%20Oluigbo">David Oluigbo</a>, <a href="https://publications.waset.org/abstracts/search?q=Erik%20Hemberg"> Erik Hemberg</a>, <a href="https://publications.waset.org/abstracts/search?q=Nathan%20Shwatal"> Nathan Shwatal</a>, <a href="https://publications.waset.org/abstracts/search?q=Wenqi%20Ding"> Wenqi Ding</a>, <a href="https://publications.waset.org/abstracts/search?q=Yin%20Yuan"> Yin Yuan</a>, <a href="https://publications.waset.org/abstracts/search?q=Susanna%20Mierau"> Susanna Mierau</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within the contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of 3 Python scripts that could both be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociate cortical cultures. We next compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell location and corresponding fluorescent times series generated by an expert neuroscientist. 
Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells. Its results were also comparable to manually analyzed results, but with significantly reduced acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline’s cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=calcium%20imaging" title="calcium imaging">calcium imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20activity" title=" neural activity"> neural activity</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/161680/automated-computer-vision-analysis-pipeline-of-calcium-imaging-neuronal-network-activity-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161680.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">82</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5</span> Mixing Enhancement with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Micromixer Using Different Mixing Fluids</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ayalew%20Yimam%20%20Ali">Ayalew Yimam Ali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The T-shaped microchannel is used to mix either miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the T-junction microchannel can be difficult due to micro-scale laminar flow with the two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusion mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce a rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field.
The newly developed 3D trapezoidal triangular spine structure used in this study was created using sophisticated CNC machine cutting tools to produce a microchannel mold with the spine along the T-junction longitudinal mixing region. The molds for the 3D trapezoidal structure, with 3D sharp-edge tip angles of 30° and a 0.3 mm trapezoidal triangular sharp-edge tip depth, were machined from PMMA glass (polymethyl methacrylate) with an advanced CNC machine, and the channel was manufactured using PDMS (polydimethylsiloxane), grown longitudinally on the top surface of the junction microchannel using soft-lithography nanofabrication strategies. Micro-particle image velocimetry (μPIV) techniques were used to visualize the 3D rolling steady acoustic streaming and to study the 3D acoustic streaming flow patterns and mixing enhancement with high-viscosity miscible fluids for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The streaming velocity fields and vorticity flow fields show vorticity maps up to 16 times higher than in the absence of acoustic streaming, and the mixing performance was evaluated at various amplitudes, flow rates, and frequencies using the grayscale value of pixel intensity with MATLAB software. Mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side of the channel and the de-ionized water-glycerol mixture on the other inlet side of the T-channel, and the degree of mixing was found to be greatly improved, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that the mixing of the two miscible high-viscosity fluids, otherwise governed by laminar flow transport phenomena, was enhanced by the formation of a new three-dimensional, intense steady streaming rolling motion with a high volume flow rate around the entrance junction mixing zone. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=micro%20fabrication" title="micro fabrication">micro fabrication</a>, <a href="https://publications.waset.org/abstracts/search?q=3d%20acoustic%20streaming%20flow%20visualization" title=" 3d acoustic streaming flow visualization"> 3d acoustic streaming flow visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=micro-particle%20image%20velocimetry" title=" micro-particle image velocimetry"> micro-particle image velocimetry</a>, <a href="https://publications.waset.org/abstracts/search?q=mixing%20enhancement."
title=" mixing enhancement."> mixing enhancement.</a> </p> <a href="https://publications.waset.org/abstracts/190156/mixing-enhancement-with-3d-acoustic-streaming-flow-patterns-induced-by-trapezoidal-triangular-structure-micromixer-using-different-mixing-fluids" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190156.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">20</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4</span> Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure using High-Viscous Liquids</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ayalew%20Yimam%20Ali">Ayalew Yimam Ali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Y-shaped microchannel is used to mix both miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the Y-junction microchannel can be a difficult mixing phenomena due to micro-scale laminar flow aspects with the two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusion mass transfer in laminar flow phenomena is acoustic streaming (AS), which is a time-averaged, second-order steady streaming that can produce rolling motion in the microchannel by oscillating a low-frequency range acoustic transducer and inducing an acoustic wave in the flow field. The developed 3D trapezoidal, triangular structure spine used in this study was created using sophisticated CNC machine cutting tools used to create microchannel mold with a 3D trapezoidal triangular structure spine alone the Y-junction longitudinal mixing region. In order to create the molds for the 3D trapezoidal structure with the 3D sharp edge tip angles of 30° and 0.3mm trapezoidal triangular sharp edge tip depth from PMMA glass (Polymethylmethacrylate) with advanced CNC machine and the channel manufactured using PDMS (Polydimethylsiloxane) which is grown up longitudinally on top surface of the Y-junction microchannel using soft lithography nanofabrication strategies. Flow visualization of 3D rolling steady acoustic streaming and mixing enhancement with high-viscosity miscible fluids with different trapezoidal, triangular structure longitudinal length, channel width, high volume flow rate, oscillation frequency, and amplitude using micro-particle image velocimetry (μPIV) techniques were used to study the 3D acoustic streaming flow patterns and mixing enhancement. The streaming velocity fields and vorticity flow fields show 16 times more high vorticity maps than in the absence of acoustic streaming, and mixing performance has been evaluated at various amplitudes, flow rates, and frequencies using the grayscale value of pixel intensity with MATLAB software. Mixing experiments were performed using fluorescent green dye solution with de-ionized water in one inlet side of the channel, and the de-ionized water-glycerol mixture on the other inlet side of the Y-channel and degree of mixing was found to have greatly improved from 67.42% without acoustic streaming to 0.96.83% with acoustic streaming. 
The results show that the mixing of the two miscible high-viscosity fluids, otherwise governed by laminar flow transport phenomena, was enhanced by the formation of a new three-dimensional, intense steady streaming rolling motion with a high volume flow rate around the entrance junction mixing zone. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=micro%20fabrication" title="micro fabrication">micro fabrication</a>, <a href="https://publications.waset.org/abstracts/search?q=3d%20acoustic%20streaming%20flow%20visualization" title=" 3d acoustic streaming flow visualization"> 3d acoustic streaming flow visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=micro-particle%20image%20velocimetry" title=" micro-particle image velocimetry"> micro-particle image velocimetry</a>, <a href="https://publications.waset.org/abstracts/search?q=mixing%20enhancement" title=" mixing enhancement"> mixing enhancement</a> </p> <a href="https://publications.waset.org/abstracts/190153/flow-visualization-and-mixing-enhancement-in-y-junction-microchannel-with-3d-acoustic-streaming-flow-patterns-induced-by-trapezoidal-triangular-structure-using-high-viscous-liquids" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190153.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">21</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3</span> Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Using High-Viscous Liquids</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ayalew%20Yimam%20Ali">Ayalew Yimam Ali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Y-shaped microchannel system is used to mix fluids of low or high viscosity, and the laminar flow of high-viscosity water-glycerol fluids makes mixing at the entrance Y-junction region a challenging issue. Acoustic streaming (AS), a time-averaged, steady second-order flow phenomenon that can produce a rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field, is a promising strategy to enhance diffusion mass transfer and mixing performance under laminar flow. In this study, the 3D trapezoidal structure was manufactured with advanced CNC machine cutting tools to produce the molds of the trapezoidal structure, with 3D sharp-edge tip angles of 30° and a 0.3 mm spine sharp-edge tip depth, from PMMA glass (polymethyl methacrylate), and the microchannel was fabricated using PDMS (polydimethylsiloxane), grown longitudinally on the top surface of the Y-junction microchannel mixing region, to visualize the 3D rolling steady acoustic streaming and to evaluate the mixing performance using high-viscosity miscible fluids.
The 3D acoustic streaming flow patterns and mixing enhancement were investigated using the micro-particle image velocimetry (μPIV) technique with different spine depth lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The velocity and vorticity flow fields show that a pair of 3D counter-rotating streaming vortices was created around the trapezoidal spine structure, with vorticity maps up to 8 times higher than the case without acoustic streaming in the Y-junction with the high-viscosity water-glycerol mixture fluids. The mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side and de-ionized water-glycerol with different mass-weight percentage ratios on the other inlet side of the Y-channel, and performance was evaluated through the degree of mixing at different amplitudes, flow rates, frequencies, and spine sharp-tip edge angles using the grayscale value of pixel intensity with MATLAB software. The degree of mixing (M) was found to improve significantly, from 67.42% without acoustic streaming to 96.8% with acoustic streaming, in the case of a 0.0986 μl/min flow rate, 12 kHz frequency, and 40 V oscillation amplitude at y = 2.26 mm. The results suggest the creation of a new 3D steady streaming rolling motion with a high volume flow rate around the entrance junction mixing region, which promotes the mixing of two similar high-viscosity fluids inside the microchannel that laminar flow alone is unable to mix. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nano%20fabrication" title="nano fabrication">nano fabrication</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20acoustic%20streaming%20flow%20visualization" title=" 3D acoustic streaming flow visualization"> 3D acoustic streaming flow visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=micro-particle%20image%20velocimetry" title=" micro-particle image velocimetry"> micro-particle image velocimetry</a>, <a href="https://publications.waset.org/abstracts/search?q=mixing%20enhancement" title=" mixing enhancement"> mixing enhancement</a> </p> <a href="https://publications.waset.org/abstracts/188950/flow-visualization-and-mixing-enhancement-in-y-junction-microchannel-with-3d-acoustic-streaming-flow-patterns-induced-by-trapezoidal-triangular-structure-using-high-viscous-liquids" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188950.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">33</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2</span> A Novel Concept of Optical Immunosensor Based on High-Affinity Recombinant Protein Binders for Tailored Target-Specific Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alena%20Semeradtova">Alena Semeradtova</a>, <a href="https://publications.waset.org/abstracts/search?q=Marcel%20Stofik"> Marcel Stofik</a>, <a href="https://publications.waset.org/abstracts/search?q=Lucie%20Mareckova"> Lucie Mareckova</a>, <a href="https://publications.waset.org/abstracts/search?q=Petr%20Maly"> Petr Maly</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Ondrej%20Stanek"> Ondrej Stanek</a>, <a href="https://publications.waset.org/abstracts/search?q=Jan%20Maly"> Jan Maly</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, novel strategies based on so-called molecular evolution were shown to be effective for the production of various peptide ligand libraries with high affinities to molecular targets of interest comparable or even better than monoclonal antibodies. The major advantage of these peptide scaffolds is mainly their prevailing low molecular weight and simple structure. This study describes a new high-affinity binding molecules based immunesensor using a simple optical system for human serum albumin (HSA) detection as a model molecule. We present a comparison of two variants of recombinant binders based on albumin binding domain of the protein G (ABD) performed on micropatterned glass chip. Binding domains may be tailored to any specific target of interest by molecular evolution. Micropatterened glass chips were prepared using UV-photolithography on chromium sputtered glasses. Glass surface was modified by (3-aminopropyl)trietoxysilane and biotin-PEG-acid using EDC/NHS chemistry. Two variants of high-affinity binding molecules were used to detect target molecule. Firstly, a variant is based on ABD domain fused with TolA chain. This molecule is in vivo biotinylated and each molecule contains one molecule of biotin and one ABD domain. Secondly, the variant is ABD domain based on streptavidin molecule and contains four gaps for biotin and four ABD domains. These high-affinity molecules were immobilized to the chip surface via biotin-streptavidin chemistry. To eliminate nonspecific binding 1% bovine serum albumin (BSA) or 6% fetal bovine serum (FBS) were used in every step. For both variants range of measured concentrations of fluorescently labelled HSA was 0 – 30 µg/ml. As a control, we performed a simultaneous assay without high-affinity binding molecules. Fluorescent signal was measured using inverse fluorescent microscope Olympus IX 70 with COOL LED pE 4000 as a light source, related filters, and camera Retiga 2000R as a detector. The fluorescent signal from non-modified areas was substracted from the signal of the fluorescent areas. Results were presented in graphs showing the dependence of measured grayscale value on the log-scale of HSA concentration. For the TolA variant the limit of detection (LOD) of the optical immunosensor proposed in this study is calculated to be 0,20 µg/ml for HSA detection in 1% BSA and 0,24 µg/ml in 6% FBS. In the case of streptavidin-based molecule, it was 0,04 µg/ml and 0,07 µg/ml respectively. The dynamical range of the immunosensor was possible to estimate just in the case of TolA variant and it was calculated to be 0,49 – 3,75 µg/ml and 0,73-1,88 µg/ml respectively. In the case of the streptavidin-based the variant we didn´t reach the surface saturation even with the 480 ug/ml concentration and the upper value of dynamical range was not estimated. Lower value was calculated to be 0,14 µg/ml and 0,17 µg/ml respectively. Based on the obtained results, it´s clear that both variants are useful for creating the bio-recognizing layer on immunosensors. For this particular system, it is obvious that the variant based on streptavidin molecule is more useful for biosensing on glass planar surfaces. Immunosensors based on this variant would exhibit better limit of detection and wide dynamical range. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=high%20affinity%20binding%20molecules" title="high affinity binding molecules">high affinity binding molecules</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20serum%20albumin" title=" human serum albumin"> human serum albumin</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20immunosensor" title=" optical immunosensor"> optical immunosensor</a>, <a href="https://publications.waset.org/abstracts/search?q=protein%20G" title=" protein G"> protein G</a>, <a href="https://publications.waset.org/abstracts/search?q=UV-photolitography" title=" UV-photolitography"> UV-photolitography</a> </p> <a href="https://publications.waset.org/abstracts/38242/a-novel-concept-of-optical-immunosensor-based-on-high-affinity-recombinant-protein-binders-for-tailored-target-specific-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38242.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=grayscale&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=grayscale&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open 
Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>