Search results for: color segmentation
class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="color segmentation"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1447</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: color segmentation</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1447</span> Image Segmentation Using 2-D Histogram in RGB Color Space in Digital Libraries </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=El%20Asnaoui%20Khalid">El Asnaoui Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Aksasse%20Brahim"> Aksasse Brahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ouanan%20Mohammed"> Ouanan Mohammed </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an unsupervised color image segmentation method. It is based on a hierarchical analysis of 2-D histogram in RGB color space. This histogram minimizes storage space of images and thus facilitates the operations between them. The improved segmentation approach shows a better identification of objects in a color image and, at the same time, the system is fast. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=hierarchical%20analysis" title=" hierarchical analysis"> hierarchical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=2-D%20histogram" title=" 2-D histogram"> 2-D histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/42096/image-segmentation-using-2-d-histogram-in-rgb-color-space-in-digital-libraries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42096.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1446</span> A Neural Approach for Color-Textured Images Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalid%20Salhi">Khalid Salhi</a>, <a href="https://publications.waset.org/abstracts/search?q=El%20Miloud%20Jaara"> El Miloud Jaara</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Talibi%20Alaoui"> Mohammed Talibi Alaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a neural approach for unsupervised natural color-texture image segmentation, which is based on both Kohonen maps and mathematical morphology, using a combination of the texture and the image color information of the image, namely, the fractal features based on fractal dimension are selected to present the information texture, and the color features presented in RGB color space. These features are then used to train the network Kohonen, which will be represented by the underlying probability density function, the segmentation of this map is made by morphological watershed transformation. The performance of our color-texture segmentation approach is compared first, to color-based methods or texture-based methods only, and then to k-means method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=color-texture" title=" color-texture"> color-texture</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal" title=" fractal"> fractal</a>, <a href="https://publications.waset.org/abstracts/search?q=watershed" title=" watershed"> watershed</a> </p> <a href="https://publications.waset.org/abstracts/51740/a-neural-approach-for-color-textured-images-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51740.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">346</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1445</span> Image Segmentation of Visual Markers in Robotic Tracking System Based on Differential Evolution Algorithm with Connected-Component Labeling</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shu-Yu%20Hsu">Shu-Yu Hsu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen-Chien%20Hsu"> Chen-Chien Hsu</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Yen%20Wang"> Wei-Yen Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color segmentation is a basic and simple way for recognizing the visual markers in a robotic tracking system. In this paper, we propose a new method for color segmentation by incorporating differential evolution algorithm and connected component labeling to autonomously preset the HSV threshold of visual markers. To evaluate the effectiveness of the proposed algorithm, a ROBOTIS OP2 humanoid robot is used to conduct the experiment, where five most commonly used color including red, purple, blue, yellow, and green in visual markers are given for comparisons. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20segmentation" title="color segmentation">color segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20evolution" title=" differential evolution"> differential evolution</a>, <a href="https://publications.waset.org/abstracts/search?q=connected%20component%20labeling" title=" connected component labeling"> connected component labeling</a>, <a href="https://publications.waset.org/abstracts/search?q=humanoid%20robot" title=" humanoid robot"> humanoid robot</a> </p> <a href="https://publications.waset.org/abstracts/34585/image-segmentation-of-visual-markers-in-robotic-tracking-system-based-on-differential-evolution-algorithm-with-connected-component-labeling" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34585.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">605</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1444</span> Use of Segmentation and Color Adjustment for Skin Tone Classification in Dermatological Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fernando%20Duarte">Fernando Duarte</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The work aims to evaluate the use of classical image processing methodologies towards skin tone classification in dermatological images. The skin tone is an important attribute when considering several factor for skin cancer diagnosis. Currently, there is a lack of clear methodologies to classify the skin tone based only on the dermatological image. In this work, a recent released dataset with the label for skin tone was used as reference for the evaluation of classical methodologies for segmentation and adjustment of color space for classification of skin tone in dermatological images. It was noticed that even though the classical methodologies can work fine for segmentation and color adjustment, classifying the skin tone without proper control of the aquisition of the sample images ended being very unreliable. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20space" title=" color space"> color space</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20tone" title=" skin tone"> skin tone</a>, <a href="https://publications.waset.org/abstracts/search?q=Fitzpatrick" title=" Fitzpatrick"> Fitzpatrick</a> </p> <a href="https://publications.waset.org/abstracts/188975/use-of-segmentation-and-color-adjustment-for-skin-tone-classification-in-dermatological-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188975.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">35</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1443</span> Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of over Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seynabou%20Toure">Seynabou Toure</a>, <a href="https://publications.waset.org/abstracts/search?q=Oumar%20Diop"> Oumar Diop</a>, <a href="https://publications.waset.org/abstracts/search?q=Kidiyo%20Kpalma"> Kidiyo Kpalma</a>, <a href="https://publications.waset.org/abstracts/search?q=Amadou%20S.%20Maiga"> Amadou S. Maiga</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color and texture are the two most determinant elements for perception and recognition of the objects in an image. For this reason, color and texture analysis find a large field of application, for example in image classification and segmentation. But, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains for image classification: face recognition, industrial inspections, food science medical imaging among others. Taking into account color in the definition of these descriptors makes it possible to better characterize images. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors measure separately different parts of the electromagnetic spectrum; the visible ones and even those that are invisible to the human eye. The amounts of light reflected by the earth in spectral bands are then transformed into grayscale images. The primary natural colors Red (R) Green (G) and Blue (B) are then used in mixtures of different spectral bands in order to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works investigate the effect of using different color space for color texture classification. However, the selection of the best performing color space in land-sea segmentation is an open question. 

[1442] Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation
Authors: Lae-Jeong Park
Abstract: The paper presents a method that utilizes figure-ground color segmentation to extract an effective global feature for false positive reduction in head-shoulder detection. Conventional detectors that rely on local features such as HOG, chosen for real-time operation, suffer from false positives. The color cue in an input image provides salient information on a global characteristic that is necessary to alleviate the false positives of local-feature-based detectors. An effective approach that uses figure-ground color segmentation has previously been presented in an effort to reduce false positives in object detection. In this paper, an extended version of that approach is presented which adopts separate multipart foregrounds instead of a single prior foreground and performs the figure-ground color segmentation with each of the foregrounds. The multipart foregrounds include the parts of the head-shoulder shape as well as additional auxiliary foregrounds optimized by a search algorithm. A classifier is constructed with a feature that consists of the set of the multiple resulting segmentations. Experimental results show that the presented method can reject more false positives than the single prior shape-based classifier as well as detectors with local features. The improvement is possible because the presented approach can reduce the false positives that have the same colors in the head and shoulder foregrounds.
Keywords: pedestrian detection, color segmentation, false positive, feature extraction
PDF: https://publications.waset.org/abstracts/61932.pdf | Downloads: 281

[1441] Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract: The color histogram is considered the oldest method used by CBIR systems for indexing images. However, global histograms do not include spatial information, which is why later techniques have attempted to overcome this limitation by involving segmentation as a preprocessing step. Weak segmentation is employed by local histograms, while other methods such as CCV (Color Coherent Vector) are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. Computing the dissimilarity between two images is thus reduced to computing the distances between the N local histograms of both images, resulting in N*N values; generally, the lowest value is used to rank images, i.e., it designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexing method and compare the results obtained against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value, among the N*N values, to trust when comparing images; in other words, on which sub-region among the N*N sub-regions to base the indexing of images. Based on the results achieved here, relying on local histograms, which imposes extra overhead on the system by involving segmentation as an additional preprocessing step, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram having the lowest distance to the query histograms.
Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance
PDF: https://publications.waset.org/abstracts/14435.pdf | Downloads: 359
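
A minimal sketch of the local-histogram indexing idea discussed above: split each image into N blocks, compute a per-block histogram, and reduce the image-to-image dissimilarity to the smallest Euclidean distance among the N*N block-pair comparisons. The grid size and bin count are illustrative, and non-overlapping blocks are used here for brevity where the paper describes overlapping ones.

```python
import numpy as np

def block_histograms(gray: np.ndarray, grid: int = 3, bins: int = 16) -> np.ndarray:
    h, w = gray.shape
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = gray[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256), density=True)
            hists.append(hist)
    return np.array(hists)                              # shape: (grid*grid, bins)

def min_block_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    # All N*N pairwise Euclidean distances; the lowest one ranks the image pair.
    d = np.linalg.norm(h1[:, None, :] - h2[None, :, :], axis=-1)
    return float(d.min())

a = np.random.randint(0, 256, (90, 90)).astype(np.uint8)
b = np.random.randint(0, 256, (90, 90)).astype(np.uint8)
print(min_block_distance(block_histograms(a), block_histograms(b)))
```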

[1440] Traffic Light Detection Using Image Segmentation
Authors: Vaishnavi Shivde, Shrishti Sinha, Trapti Mishra
Abstract: Traffic light detection from a moving vehicle is an important technology both for driver safety assistance functions and for autonomous driving in the city. This paper proposes a deep-learning-based traffic light recognition method that consists of a pixel-wise image segmentation technique and a fully convolutional network, i.e., the UNET architecture. A method for detecting the position and recognizing the state of traffic lights in video sequences is presented and evaluated using a traffic light dataset that contains masked traffic light image data. The first stage is detection, which is accomplished through image processing (image segmentation) techniques such as image cropping, color transformation, and segmentation of candidate traffic lights. The second stage is recognition, i.e., identifying the color, and thereby the state, of the traffic light, which is achieved using a convolutional neural network (UNET architecture).
Keywords: traffic light detection, image segmentation, machine learning, classification, convolutional neural networks
PDF: https://publications.waset.org/abstracts/137254.pdf | Downloads: 173
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20light%20detection" title="traffic light detection">traffic light detection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/137254/traffic-light-detection-using-image-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137254.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">173</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1439</span> Preserving Urban Cultural Heritage with Deep Learning: Color Planning for Japanese Merchant Towns</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dongqi%20Li">Dongqi Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yunjia%20Huang"> Yunjia Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Tomo%20Inoue"> Tomo Inoue</a>, <a href="https://publications.waset.org/abstracts/search?q=Kohei%20Inoue"> Kohei Inoue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With urbanization, urban cultural heritage is facing the impact and destruction of modernization and urbanization. Many historical areas are losing their historical information and regional cultural characteristics, so it is necessary to carry out systematic color planning for historical areas in conservation. As an early focus on urban color planning, Japan has a systematic approach to urban color planning. Hence, this paper selects five merchant towns from the category of important traditional building preservation areas in Japan as the subject of this study to explore the color structure and emotion of this type of historic area. First, the image semantic segmentation method identifies the buildings, roads, and landscape environments. Their color data were extracted for color composition and emotion analysis to summarize their common features. Second, the obtained Internet evaluations were extracted by natural language processing for keyword extraction. The correlation analysis of the color structure and keywords provides a valuable reference for conservation decisions for this historic area in the town. This paper also combines the color structure and Internet evaluation results with generative adversarial networks to generate predicted images of color structure improvements and color improvement schemes. The methods and conclusions of this paper can provide new ideas for the digital management of environmental colors in historic districts and provide a valuable reference for the inheritance of local traditional culture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=historic%20districts" title="historic districts">historic districts</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20planning" title=" color planning"> color planning</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a> </p> <a href="https://publications.waset.org/abstracts/165749/preserving-urban-cultural-heritage-with-deep-learning-color-planning-for-japanese-merchant-towns" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165749.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1438</span> Image Segmentation Techniques: Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lindani%20Mbatha">Lindani Mbatha</a>, <a href="https://publications.waset.org/abstracts/search?q=Suvendi%20Rimer"> Suvendi Rimer</a>, <a href="https://publications.waset.org/abstracts/search?q=Mpho%20Gololo"> Mpho Gololo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image segmentation is the process of dividing an image into several sections, such as the object's background and the foreground. It is a critical technique in both image-processing tasks and computer vision. Most of the image segmentation algorithms have been developed for gray-scale images and little research and algorithms have been developed for the color images. Most image segmentation algorithms or techniques vary based on the input data and the application. Nearly all of the techniques are not suitable for noisy environments. Most of the work that has been done uses the Markov Random Field (MRF), which involves the computations and is said to be robust to noise. In the past recent years' image segmentation has been brought to tackle problems such as easy processing of an image, interpretation of the contents of an image, and easy analysing of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed in the past years. The techniques include neural networks (CNN), edge-based techniques, region growing, clustering, and thresholding techniques and so on. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses the applications and potential future developments that can be done around image segmentation. This review article concludes with the fact that no technique is perfectly suitable for the segmentation of all different types of images, but the use of hybrid techniques yields more accurate and efficient results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering-based" title="clustering-based">clustering-based</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution-network" title=" convolution-network"> convolution-network</a>, <a href="https://publications.waset.org/abstracts/search?q=edge-based" title=" edge-based"> edge-based</a>, <a href="https://publications.waset.org/abstracts/search?q=region-growing" title=" region-growing"> region-growing</a> </p> <a href="https://publications.waset.org/abstracts/166513/image-segmentation-techniques-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166513.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">96</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1437</span> An Automated System for the Detection of Citrus Greening Disease Based on Visual Descriptors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sidra%20Naeem">Sidra Naeem</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayesha%20Naeem"> Ayesha Naeem</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahar%20Rahim"> Sahar Rahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nadia%20Nawaz%20Qadri"> Nadia Nawaz Qadri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Citrus greening is a bacterial disease that causes considerable damage to citrus fruits worldwide. Efficient method for this disease detection must be carried out to minimize the production loss. This paper presents a pattern recognition system that comprises three stages for the detection of citrus greening from Orange leaves: segmentation, feature extraction and classification. Image segmentation is accomplished by adaptive thresholding. The feature extraction stage comprises of three visual descriptors i.e. shape, color and texture. From shape feature we have used asymmetry index, from color feature we have used histogram of Cb component from YCbCr domain and from texture feature we have used local binary pattern. Classification was done using support vector machines and k nearest neighbors. The best performances of the system is Accuracy = 88.02% and AUROC = 90.1% was achieved by automatic segmented images. Our experiments validate that: (1). Segmentation is an imperative preprocessing step for computer assisted diagnosis of citrus greening, and (2). The combination of shape, color and texture features form a complementary set towards the identification of citrus greening disease. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=citrus%20greening" title="citrus greening">citrus greening</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/98969/an-automated-system-for-the-detection-of-citrus-greening-disease-based-on-visual-descriptors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98969.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1436</span> The Laser Line Detection for Autonomous Mapping Based on Color Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pavel%20Chmelar">Pavel Chmelar</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20Dobrovolny"> Martin Dobrovolny</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Laser projection or laser footprint detection is today widely used in many fields of robotics, measurement, or electronics. The system accuracy strictly depends on precise laser footprint detection on target objects. This article deals with the laser line detection based on the RGB segmentation and the component labeling. As a measurement device was used the developed optical rangefinder. The optical rangefinder is equipped with vertical sweeping of the laser beam and high quality camera. This system was developed mainly for automatic exploration and mapping of unknown spaces. In the first section is presented a new detection algorithm. In the second section are presented measurements results. The measurements were performed in variable light conditions in interiors. The last part of the article present achieved results and their differences between day and night measurements. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20segmentation" title="color segmentation">color segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=component%20labelling" title=" component labelling"> component labelling</a>, <a href="https://publications.waset.org/abstracts/search?q=laser%20line%20detection" title=" laser line detection"> laser line detection</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20mapping" title=" automatic mapping"> automatic mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=distance%20measurement" title=" distance measurement"> distance measurement</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20map" title=" vector map"> vector map</a> </p> <a href="https://publications.waset.org/abstracts/1789/the-laser-line-detection-for-autonomous-mapping-based-on-color-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1789.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">432</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1435</span> An Improved C-Means Model for MRI Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ying%20Shen">Ying Shen</a>, <a href="https://publications.waset.org/abstracts/search?q=Weihua%20Zhu"> Weihua Zhu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Medical images are important to help identifying different diseases, for example, Magnetic resonance imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. Image segmentation, in medical image analysis, is usually the first step to find out some characteristics with similar color, intensity or texture so that the diagnosis could be further carried out based on these features. This paper introduces an improved C-means model to segment the MRI images. The model is based on information entropy to evaluate the segmentation results by achieving global optimization. Several contributions are significant. Firstly, Genetic Algorithm (GA) is used for achieving global optimization in this model where fuzzy C-means clustering algorithm (FCMA) is not capable of doing that. Secondly, the information entropy after segmentation is used for measuring the effectiveness of MRI image processing. Experimental results show the outperformance of the proposed model by comparing with traditional approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=magnetic%20resonance%20image%20%28MRI%29" title="magnetic resonance image (MRI)">magnetic resonance image (MRI)</a>, <a href="https://publications.waset.org/abstracts/search?q=c-means%20model" title=" c-means model"> c-means model</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20entropy" title=" information entropy"> information entropy</a> </p> <a href="https://publications.waset.org/abstracts/79824/an-improved-c-means-model-for-mri-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79824.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">225</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1434</span> Automatic Facial Skin Segmentation Using Possibilistic C-Means Algorithm for Evaluation of Facial Surgeries</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elham%20Alaee">Elham Alaee</a>, <a href="https://publications.waset.org/abstracts/search?q=Mousa%20Shamsi"> Mousa Shamsi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hossein%20Ahmadi"> Hossein Ahmadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Soroosh%20Nazem"> Soroosh Nazem</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Hossein%20Sedaaghi"> Mohammad Hossein Sedaaghi </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human face has a fundamental role in the appearance of individuals. So the importance of facial surgeries is undeniable. Thus, there is a need for the appropriate and accurate facial skin segmentation in order to extract different features. Since Fuzzy C-Means (FCM) clustering algorithm doesn’t work appropriately for noisy images and outliers, in this paper we exploit Possibilistic C-Means (PCM) algorithm in order to segment the facial skin. For this purpose, first, we convert facial images from RGB to YCbCr color space. To evaluate performance of the proposed algorithm, the database of Sahand University of Technology, Tabriz, Iran was used. In order to have a better understanding from the proposed algorithm; FCM and Expectation-Maximization (EM) algorithms are also used for facial skin segmentation. The proposed method shows better results than the other segmentation methods. Results include misclassification error (0.032) and the region’s area error (0.045) for the proposed algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20image" title="facial image">facial image</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=PCM" title=" PCM"> PCM</a>, <a href="https://publications.waset.org/abstracts/search?q=FCM" title=" FCM"> FCM</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20error" title=" skin error"> skin error</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20surgery" title=" facial surgery"> facial surgery</a> </p> <a href="https://publications.waset.org/abstracts/10297/automatic-facial-skin-segmentation-using-possibilistic-c-means-algorithm-for-evaluation-of-facial-surgeries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10297.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">586</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1433</span> FISCEAPP: FIsh Skin Color Evaluation APPlication</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Urban">J. Urban</a>, <a href="https://publications.waset.org/abstracts/search?q=%C3%81.%20S.%20Botella"> Á. S. Botella</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20E.%20Robaina"> L. E. Robaina</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20B%C3%A1rta"> A. Bárta</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Sou%C4%8Dek"> P. Souček</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20C%C3%ADsa%C5%99"> P. Císař</a>, <a href="https://publications.waset.org/abstracts/search?q=%C5%A0.%20Pap%C3%A1%C4%8Dek"> Š. Papáček</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20M.%20Dom%C3%ADnguez"> L. M. Domínguez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Skin coloration in fish is of great physiological, behavioral and ecological importance and can be considered as an index of animal welfare in aquaculture as well as an important quality factor in the retail value. Currently, in order to compare color in animals fed on different diets, biochemical analysis, and colorimetry of fished, mildly anesthetized or dead body, are very accurate and meaningful measurements. The noninvasive method using digital images of the fish body was developed as a standalone application. This application deals with the computation burden and memory consumption of large input files, optimizing piece wise processing and analysis with the memory/computation time ratio. For the comparison of color distributions of various experiments and different color spaces (RGB, CIE L*a*b*) the comparable semi-equidistant binning of multi channels representation is introduced. It is derived from the knowledge of quantization levels and Freedman-Diaconis rule. The color calibrations and camera responsivity function were necessary part of the measurement process. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20distribution" title="color distribution">color distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=fish%20skin%20color" title=" fish skin color"> fish skin color</a>, <a href="https://publications.waset.org/abstracts/search?q=piecewise%20transformation" title=" piecewise transformation"> piecewise transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20to%20background%20segmentation" title=" object to background segmentation"> object to background segmentation</a> </p> <a href="https://publications.waset.org/abstracts/15406/fisceapp-fish-skin-color-evaluation-application" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15406.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">262</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1432</span> Evaluating the Performance of Color Constancy Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Damanjit%20Kaur">Damanjit Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Avani%20Bhatia"> Avani Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color constancy is significant for human vision since color is a pictorial cue that helps in solving different visions tasks such as tracking, object recognition, or categorization. Therefore, several computational methods have tried to simulate human color constancy abilities to stabilize machine color representations. Two different kinds of methods have been used, i.e., normalization and constancy. While color normalization creates a new representation of the image by canceling illuminant effects, color constancy directly estimates the color of the illuminant in order to map the image colors to a canonical version. Color constancy is the capability to determine colors of objects independent of the color of the light source. This research work studies the most of the well-known color constancy algorithms like white point and gray world. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20constancy" title="color constancy">color constancy</a>, <a href="https://publications.waset.org/abstracts/search?q=gray%20world" title=" gray world"> gray world</a>, <a href="https://publications.waset.org/abstracts/search?q=white%20patch" title=" white patch"> white patch</a>, <a href="https://publications.waset.org/abstracts/search?q=modified%20white%20patch" title=" modified white patch "> modified white patch </a> </p> <a href="https://publications.waset.org/abstracts/4799/evaluating-the-performance-of-color-constancy-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4799.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1431</span> A Way of Converting Color Images to Gray Scale Ones for the Color-Blind: Applying to the part of the Tokyo Subway Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsuhiro%20Narikiyo">Katsuhiro Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shota%20Hashikawa"> Shota Hashikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a way of removing noises and reducing the number of colors contained in a JPEG image. Main purpose of this project is to convert color images to monochrome images for the color-blind. We treat the crispy color images like the Tokyo subway map. Each color in the image has an important information. But for the color blinds, similar colors cannot be distinguished. If we can convert those colors to different gray values, they can distinguish them. Therefore we try to convert color images to monochrome images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color-blind" title="color-blind">color-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG" title=" JPEG"> JPEG</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20image" title=" monochrome image"> monochrome image</a>, <a href="https://publications.waset.org/abstracts/search?q=denoise" title=" denoise"> denoise</a> </p> <a href="https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">355</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1430</span> Automatic Detection of Proliferative Cells in Immunohistochemically Images of Meningioma Using Fuzzy C-Means Clustering and HSV Color Space</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vahid%20Anari">Vahid Anari</a>, <a href="https://publications.waset.org/abstracts/search?q=Mina%20Bakhshi"> Mina Bakhshi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual search and identification of immunohistochemically stained tissue of meningioma was performed manually in pathologic laboratories to detect and diagnose the cancers type of meningioma. This task is very tedious and time-consuming. Moreover, because of cell's complex nature, it still remains a challenging task to segment cells from its background and analyze them automatically. In this paper, we develop and test a computerized scheme that can automatically identify cells in microscopic images of meningioma and classify them into positive (proliferative) and negative (normal) cells. Dataset including 150 images are used to test the scheme. The scheme uses Fuzzy C-means algorithm as a color clustering method based on perceptually uniform hue, saturation, value (HSV) color space. Since the cells are distinguishable by the human eye, the accuracy and stability of the algorithm are quantitatively compared through application to a wide variety of real images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=positive%20cell" title="positive cell">positive cell</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20segmentation" title=" color segmentation"> color segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=HSV%20color%20space" title=" HSV color space"> HSV color space</a>, <a href="https://publications.waset.org/abstracts/search?q=immunohistochemistry" title=" immunohistochemistry"> immunohistochemistry</a>, <a href="https://publications.waset.org/abstracts/search?q=meningioma" title=" meningioma"> meningioma</a>, <a href="https://publications.waset.org/abstracts/search?q=thresholding" title=" thresholding"> thresholding</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20c-means" title=" fuzzy c-means"> fuzzy c-means</a> </p> <a href="https://publications.waset.org/abstracts/109638/automatic-detection-of-proliferative-cells-in-immunohistochemically-images-of-meningioma-using-fuzzy-c-means-clustering-and-hsv-color-space" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/109638.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">210</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1429</span> A Comparative Study of Medical Image Segmentation Methods for Tumor Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mayssa%20Bensalah">Mayssa Bensalah</a>, <a href="https://publications.waset.org/abstracts/search?q=Atef%20Boujelben"> Atef Boujelben</a>, <a href="https://publications.waset.org/abstracts/search?q=Mouna%20Baklouti"> Mouna Baklouti</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Abid"> Mohamed Abid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image segmentation has a fundamental role in analysis and interpretation for many applications. The automated segmentation of organs and tissues throughout the body using computed imaging has been rapidly increasing. Indeed, it represents one of the most important parts of clinical diagnostic tools. In this paper, we discuss a thorough literature review of recent methods of tumour segmentation from medical images which are briefly explained with the recent contribution of various researchers. This study was followed by comparing these methods in order to define new directions to develop and improve the performance of the segmentation of the tumour area from medical images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=features%20extraction" title="features extraction">features extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a>, <a href="https://publications.waset.org/abstracts/search?q=tumor%20detection" title=" tumor detection"> tumor detection</a> </p> <a href="https://publications.waset.org/abstracts/132616/a-comparative-study-of-medical-image-segmentation-methods-for-tumor-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132616.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">167</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1428</span> Toward Automatic Chest CT Image Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Angely%20Sim%20Jia%20Wun">Angely Sim Jia Wun</a>, <a href="https://publications.waset.org/abstracts/search?q=Sasa%20Arsovski"> Sasa Arsovski</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Numerous studies have been conducted on the segmentation of medical images. Segmenting the lungs is one of the common research topics in those studies. Our research stemmed from the lack of solutions for automatic bone, airway, and vessel segmentation, despite the existence of multiple lung segmentation techniques. Consequently, currently, available software tools used for medical image segmentation do not provide automatic lung, bone, airway, and vessel segmentation. This paper presents segmentation techniques along with an interactive software tool architecture for segmenting bone, lung, airway, and vessel tissues. Additionally, we propose a method for creating binary masks from automatically generated segments. The key contribution of our approach is the technique for automatic image thresholding using adjustable Hounsfield values and binary mask extraction. Generated binary masks can be successfully used as a training dataset for deep-learning solutions in medical image segmentation. In this paper, we also examine the current software tools used for medical image segmentation, discuss our approach, and identify its advantages. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lung%20segmentation" title="lung segmentation">lung segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=binary%20masks" title=" binary masks"> binary masks</a>, <a href="https://publications.waset.org/abstracts/search?q=U-Net" title=" U-Net"> U-Net</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20software%20tools" title=" medical software tools"> medical software tools</a> </p> <a href="https://publications.waset.org/abstracts/168342/toward-automatic-chest-ct-image-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/168342.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">98</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1427</span> A Product-Specific/Unobservable Approach to Segmentation for a Value Expressive Credit Card Service</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manfred%20F.%20Maute">Manfred F. Maute</a>, <a href="https://publications.waset.org/abstracts/search?q=Olga%20Naumenko"> Olga Naumenko</a>, <a href="https://publications.waset.org/abstracts/search?q=Raymond%20T.%20Kong"> Raymond T. Kong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Using data from a nationally representative financial panel of Canadian households, this study develops a psychographic segmentation of the customers of a value-expressive credit card service and tests for effects on relational response differences. The variety of segments elicited by agglomerative and k means clustering and the familiar profiles of individual clusters suggest that the face validity of the psychographic segmentation was quite high. Segmentation had a significant effect on customer satisfaction and relationship depth. However, when socio-demographic characteristics like household size and income were accounted for in the psychographic segmentation, the effect on relational response differences was magnified threefold. Implications for the segmentation of financial services markets are considered. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=customer%20satisfaction" title="customer satisfaction">customer satisfaction</a>, <a href="https://publications.waset.org/abstracts/search?q=financial%20services" title=" financial services"> financial services</a>, <a href="https://publications.waset.org/abstracts/search?q=psychographics" title=" psychographics"> psychographics</a>, <a href="https://publications.waset.org/abstracts/search?q=response%20differences" title=" response differences"> response differences</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/39282/a-product-specificunobservable-approach-to-segmentation-for-a-value-expressive-credit-card-service" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39282.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">334</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1426</span> A Comparison between Different Segmentation Techniques Used in Medical Imaging </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ibtihal%20D.%20Mustafa">Ibtihal D. Mustafa</a>, <a href="https://publications.waset.org/abstracts/search?q=Mawia%20A.%20Hassan"> Mawia A. Hassan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tumor segmentation from MRI image is important part of medical images experts. This is particularly a challenging task because of the high assorting appearance of tumor tissue among different patients. MRI images are advance of medical imaging because it is give richer information about human soft tissue. There are different segmentation techniques to detect MRI brain tumor. In this paper, different procedure segmentation methods are used to segment brain tumors and compare the result of segmentations by using correlation and structural similarity index (SSIM) to analysis and see the best technique that could be applied to MRI image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MRI" title="MRI">MRI</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=correlation" title=" correlation"> correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20similarity" title=" structural similarity"> structural similarity</a> </p> <a href="https://publications.waset.org/abstracts/51091/a-comparison-between-different-segmentation-techniques-used-in-medical-imaging" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51091.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">410</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1425</span> Multidimensional Sports Spectators Segmentation and Social Media Marketing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Schmid">B. Schmid</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Kexel"> C. Kexel</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Djafarova"> E. Djafarova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Understanding consumers is elementary for practitioners in marketing. Consumers of sports events, the sports spectators, are a particularly complex consumer crowd. In order to identify and define their profiles different segmentation approaches can be found in literature, one of them being multidimensional segmentation. Multidimensional segmentation models correspond to the broad range of attitudes, behaviours, motivations and beliefs of sports spectators, other than earlier models. Moreover, in sports there are some well-researched disciplines (e.g. football or North American sports) where consumer profiles and marketing strategies are elaborate and others where no research at all can be found. For example, there is almost no research on athletics spectators. This paper explores the current state of research on sports spectators segmentation. An in-depth literature review provides the framework for a spectators segmentation in athletics. On this basis, additional potential consumer groups and implications for social media marketing will be explored. The findings are the basis for further research. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multidimensional%20segmentation" title="multidimensional segmentation">multidimensional segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20media" title=" social media"> social media</a>, <a href="https://publications.waset.org/abstracts/search?q=sports%20marketing" title=" sports marketing"> sports marketing</a>, <a href="https://publications.waset.org/abstracts/search?q=sports%20spectators%20segmentation" title=" sports spectators segmentation"> sports spectators segmentation</a> </p> <a href="https://publications.waset.org/abstracts/47477/multidimensional-sports-spectators-segmentation-and-social-media-marketing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47477.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">307</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1424</span> Arabic Handwriting Recognition Using Local Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Arif">Mohammed Arif</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdessalam%20Kifouche"> Abdessalam Kifouche</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Optical character recognition (OCR) has a main role in the present time. It's capable to solve many serious problems and simplify human activities. The OCR yields to 70's, since many solutions has been proposed, but unfortunately, it was supportive to nothing but Latin languages. This work proposes a system of recognition of an off-line Arabic handwriting. This system is based on a structural segmentation method and uses support vector machines (SVM) in the classification phase. We have presented a state of art of the characters segmentation methods, after that a view of the OCR area, also we will address the normalization problems we went through. After a comparison between the Arabic handwritten characters & the segmentation methods, we had introduced a contribution through a segmentation algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=OCR" title="OCR">OCR</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=Arabic%20characters" title=" Arabic characters"> Arabic characters</a>, <a href="https://publications.waset.org/abstracts/search?q=PAW" title=" PAW"> PAW</a>, <a href="https://publications.waset.org/abstracts/search?q=post-processing" title=" post-processing"> post-processing</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a> </p> <a href="https://publications.waset.org/abstracts/183495/arabic-handwriting-recognition-using-local-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183495.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">71</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1423</span> Imp_hist-Si: Improved Hybrid Image Segmentation Technique for Satellite Imagery to Decrease the Segmentation Error Rate</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neetu%20Manocha">Neetu Manocha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image segmentation is a technique where a picture is parted into distinct parts having similar features which have a place with similar items. Various segmentation strategies have been proposed as of late by prominent analysts. But, after ultimate thorough research, the novelists have analyzed that generally, the old methods do not decrease the segmentation error rate. Then author finds the technique HIST-SI to decrease the segmentation error rates. In this technique, cluster-based and threshold-based segmentation techniques are merged together. After then, to improve the result of HIST-SI, the authors added the method of filtering and linking in this technique named Imp_HIST-SI to decrease the segmentation error rates. The goal of this research is to find a new technique to decrease the segmentation error rates and produce much better results than the HIST-SI technique. For testing the proposed technique, a dataset of Bhuvan – a National Geoportal developed and hosted by ISRO (Indian Space Research Organisation) is used. Experiments are conducted using Scikit-image & OpenCV tools of Python, and performance is evaluated and compared over various existing image segmentation techniques for several matrices, i.e., Mean Square Error (MSE) and Peak Signal Noise Ratio (PSNR). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=satellite%20image" title="satellite image">satellite image</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=error%20rate" title=" error rate"> error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=MSE" title=" MSE"> MSE</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=HIST-SI" title=" HIST-SI"> HIST-SI</a>, <a href="https://publications.waset.org/abstracts/search?q=linking" title=" linking"> linking</a>, <a href="https://publications.waset.org/abstracts/search?q=filtering" title=" filtering"> filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=imp_HIST-SI" title=" imp_HIST-SI"> imp_HIST-SI</a> </p> <a href="https://publications.waset.org/abstracts/149905/imp-hist-si-improved-hybrid-image-segmentation-technique-for-satellite-imagery-to-decrease-the-segmentation-error-rate" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149905.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1422</span> Performance Evaluation of Various Segmentation Techniques on MRI of Brain Tissue</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.V.%20Suryawanshi">U.V. Suryawanshi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.S.%20Chowhan"> S.S. Chowhan</a>, <a href="https://publications.waset.org/abstracts/search?q=U.V%20Kulkarni"> U.V Kulkarni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accuracy of segmentation methods is of great importance in brain image analysis. Tissue classification in Magnetic Resonance brain images (MRI) is an important issue in the analysis of several brain dementias. This paper portraits performance of segmentation techniques that are used on Brain MRI. A large variety of algorithms for segmentation of Brain MRI has been developed. The objective of this paper is to perform a segmentation process on MR images of the human brain, using Fuzzy c-means (FCM), Kernel based Fuzzy c-means clustering (KFCM), Spatial Fuzzy c-means (SFCM) and Improved Fuzzy c-means (IFCM). The review covers imaging modalities, MRI and methods for noise reduction and segmentation approaches. All methods are applied on MRI brain images which are degraded by salt-pepper noise demonstrate that the IFCM algorithm performs more robust to noise than the standard FCM algorithm. We conclude with a discussion on the trend of future research in brain segmentation and changing norms in IFCM for better results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=preprocessing" title=" preprocessing"> preprocessing</a>, <a href="https://publications.waset.org/abstracts/search?q=MRI" title=" MRI"> MRI</a>, <a href="https://publications.waset.org/abstracts/search?q=FCM" title=" FCM"> FCM</a>, <a href="https://publications.waset.org/abstracts/search?q=KFCM" title=" KFCM"> KFCM</a>, <a href="https://publications.waset.org/abstracts/search?q=SFCM" title=" SFCM"> SFCM</a>, <a href="https://publications.waset.org/abstracts/search?q=IFCM" title=" IFCM"> IFCM</a> </p> <a href="https://publications.waset.org/abstracts/12406/performance-evaluation-of-various-segmentation-techniques-on-mri-of-brain-tissue" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12406.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">331</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1421</span> Towards Integrating Statistical Color Features for Human Skin Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Zamri%20Osman">Mohd Zamri Osman</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Aizaini%20Maarof"> Mohd Aizaini Maarof</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Foad%20Rohani"> Mohd Foad Rohani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection recognized as the primary step in most of the applications such as face detection, illicit image filtering, hand recognition and video surveillance. The performance of any skin detection applications greatly relies on the two components: feature extraction and classification method. Skin color is the most vital information used for skin detection purpose. However, color feature alone sometimes could not handle images with having same color distribution with skin color. A color feature of pixel-based does not eliminate the skin-like color due to the intensity of skin and skin-like color fall under the same distribution. Hence, the statistical color analysis will be exploited such mean and standard deviation as an additional feature to increase the reliability of skin detector. In this paper, we studied the effectiveness of statistical color feature for human skin detection. Furthermore, the paper analyzed the integrated color and texture using eight classifiers with three color spaces of RGB, YCbCr, and HSV. The experimental results show that the integrating statistical feature using Random Forest classifier achieved a significant performance with an F1-score 0.969. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title=" skin detection"> skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20feature" title=" statistical feature"> statistical feature</a> </p> <a href="https://publications.waset.org/abstracts/43485/towards-integrating-statistical-color-features-for-human-skin-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1420</span> Object Detection Based on Plane Segmentation and Features Matching for a Service Robot</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ant%C3%B3nio%20J.%20R.%20Neves">António J. R. Neves</a>, <a href="https://publications.waset.org/abstracts/search?q=Rui%20Garcia"> Rui Garcia</a>, <a href="https://publications.waset.org/abstracts/search?q=Paulo%20Dias"> Paulo Dias</a>, <a href="https://publications.waset.org/abstracts/search?q=Alina%20Trifan"> Alina Trifan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the aging of the world population and the continuous growth in technology, service robots are more and more explored nowadays as alternatives to healthcare givers or personal assistants for the elderly or disabled people. Any service robot should be capable of interacting with the human companion, receive commands, navigate through the environment, either known or unknown, and recognize objects. This paper proposes an approach for object recognition based on the use of depth information and color images for a service robot. We present a study on two of the most used methods for object detection, where 3D data is used to detect the position of objects to classify that are found on horizontal surfaces. Since most of the objects of interest accessible for service robots are on these surfaces, the proposed 3D segmentation reduces the processing time and simplifies the scene for object recognition. The first approach for object recognition is based on color histograms, while the second is based on the use of the SIFT and SURF feature descriptors. We present comparative experimental results obtained with a real service robot. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title="object detection">object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=feature" title=" feature"> feature</a>, <a href="https://publications.waset.org/abstracts/search?q=descriptors" title=" descriptors"> descriptors</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF" title=" SURF"> SURF</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20images" title=" depth images"> depth images</a>, <a href="https://publications.waset.org/abstracts/search?q=service%20robots" title=" service robots"> service robots</a> </p> <a href="https://publications.waset.org/abstracts/39840/object-detection-based-on-plane-segmentation-and-features-matching-for-a-service-robot" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39840.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">545</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1419</span> Sea-Land Segmentation Method Based on the Transformer with Enhanced Edge Supervision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lianzhong%20Zhang">Lianzhong Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Huang"> Chao Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sea-land segmentation is a basic step in many tasks such as sea surface monitoring and ship detection. The existing sea-land segmentation algorithms have poor segmentation accuracy, and the parameter adjustments are cumbersome and difficult to meet actual needs. Also, the current sea-land segmentation adopts traditional deep learning models that use Convolutional Neural Networks (CNN). At present, the transformer architecture has achieved great success in the field of natural images, but its application in the field of radar images is less studied. Therefore, this paper proposes a sea-land segmentation method based on the transformer architecture to strengthen edge supervision. It uses a self-attention mechanism with a gating strategy to better learn relative position bias. Meanwhile, an additional edge supervision branch is introduced. The decoder stage allows the feature information of the two branches to interact, thereby improving the edge precision of the sea-land segmentation. Based on the Gaofen-3 satellite image dataset, the experimental results show that the method proposed in this paper can effectively improve the accuracy of sea-land segmentation, especially the accuracy of sea-land edges. The mean IoU (Intersection over Union), edge precision, overall precision, and F1 scores respectively reach 96.36%, 84.54%, 99.74%, and 98.05%, which are superior to those of the mainstream segmentation models and have high practical application values. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SAR" title="SAR">SAR</a>, <a href="https://publications.waset.org/abstracts/search?q=sea-land%20segmentation" title=" sea-land segmentation"> sea-land segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer" title=" transformer"> transformer</a> </p> <a href="https://publications.waset.org/abstracts/148759/sea-land-segmentation-method-based-on-the-transformer-with-enhanced-edge-supervision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148759.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">181</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1418</span> Counting People Utilizing Space-Time Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Elmarhomy">Ahmed Elmarhomy</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Terada"> K. Terada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An automated method for counting passerby has been proposed using virtual-vertical measurement lines. Space-time image is representing the human regions which are treated using the segmentation process. Different color space has been used to perform the template matching. A proper template matching has been achieved to determine direction and speed of passing people. Distinguish one or two passersby has been investigated using a correlation between passerby speed and the human-pixel area. Finally, the effectiveness of the presented method has been experimentally verified. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=counting%20people" title="counting people">counting people</a>, <a href="https://publications.waset.org/abstracts/search?q=measurement%20line" title=" measurement line"> measurement line</a>, <a href="https://publications.waset.org/abstracts/search?q=space-time%20image" title=" space-time image"> space-time image</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=template%20matching" title=" template matching"> template matching</a> </p> <a href="https://publications.waset.org/abstracts/46877/counting-people-utilizing-space-time-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46877.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">452</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=48">48</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=49">49</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20segmentation&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> 
<div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>