
Search results for: function of the country image

aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="function of the country image"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 11194</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: function of the country image</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11194</span> Definition, Structure, and Core Functions of the State Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rosa%20Nurtazina">Rosa Nurtazina</a>, <a href="https://publications.waset.org/abstracts/search?q=Yerkebulan%20Zhumashov"> Yerkebulan Zhumashov</a>, <a href="https://publications.waset.org/abstracts/search?q=Maral%20Tomanova"> Maral Tomanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Humanity is entering an era when 'virtual reality' as the image of the world created by the media with the help of the Internet does not match the reality in many respects, when new communication technologies create a fundamentally different and previously unknown 'global space'. According to these technologies, the state begins to change the basic technology of political communication of the state and society, the state and the state. Nowadays, image of the state becomes the most important tool and technology. Image is a purposefully created image granting political object (person, organization, country, etc.) certain social and political values and promoting more emotional perception. Political image of the state plays an important role in international relations. The success of the country's foreign policy, development of trade and economic relations with other countries depends on whether it is positive or negative. Foreign policy image has an impact on political processes taking place in the state: the negative image of the countries can be used by opposition forces as one of the arguments to criticize the government and its policies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20country" title="image of the country">image of the country</a>, <a href="https://publications.waset.org/abstracts/search?q=country%27s%20image%20classification" title=" country&#039;s image classification"> country&#039;s image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image" title=" function of the country image"> function of the country image</a>, <a href="https://publications.waset.org/abstracts/search?q=country%27s%20image%20components" title=" country&#039;s image components"> country&#039;s image components</a> </p> <a href="https://publications.waset.org/abstracts/5104/definition-structure-and-core-functions-of-the-state-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5104.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">434</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11193</span> Image Enhancement of Histological Slides by Using Nonlinear Transfer Function</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20Suman">D. Suman</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Nikitha"> B. Nikitha</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Sarvani"> J. Sarvani</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Archana"> V. Archana</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Histological slides provide clinical diagnostic information about the subjects from the ancient times. Even with the advent of high resolution imaging cameras the image tend to have some background noise which makes the analysis complex. A study of the histological slides is done by using a nonlinear transfer function based image enhancement method. The method processes the raw, color images acquired from the biological microscope, which, in general, is associated with background noise. The images usually appearing blurred does not convey the intended information. In this regard, an enhancement method is proposed and implemented on 50 histological slides of human tissue by using nonlinear transfer function method. The histological image is converted into HSV color image. The luminance value of the image is enhanced (V component) because change in the H and S components could change the color balance between HSV components. The HSV image is divided into smaller blocks for carrying out the dynamic range compression by using a linear transformation function. Each pixel in the block is enhanced based on the contrast of the center pixel and its neighborhood. After the processing the V component, the HSV image is transformed into a colour image. The study has shown improvement of the characteristics of the image so that the significant details of the histological images were improved. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HSV%20space" title="HSV space">HSV space</a>, <a href="https://publications.waset.org/abstracts/search?q=histology" title=" histology"> histology</a>, <a href="https://publications.waset.org/abstracts/search?q=enhancement" title=" enhancement"> enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a> </p> <a href="https://publications.waset.org/abstracts/12167/image-enhancement-of-histological-slides-by-using-nonlinear-transfer-function" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12167.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">329</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11192</span> Development of Algorithms for the Study of the Image in Digital Form for Satellite Applications: Extraction of a Road Network and Its Nodes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zineb%20Nougrara">Zineb Nougrara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a novel methodology for extracting a road network and its nodes from satellite images of Algeria country. This developed technique is a progress of our previous research works. It is founded on the information theory and the mathematical morphology; the information theory and the mathematical morphology are combined together to extract and link the road segments to form a road network and its nodes. We, therefore, have to define objects as sets of pixels and to study the shape of these objects and the relations that exist between them. In this approach, geometric and radiometric features of roads are integrated by a cost function and a set of selected points of a crossing road. Its performances were tested on satellite images of Algeria country. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=satellite%20image" title="satellite image">satellite image</a>, <a href="https://publications.waset.org/abstracts/search?q=road%20network" title=" road network"> road network</a>, <a href="https://publications.waset.org/abstracts/search?q=nodes" title=" nodes"> nodes</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis%20and%20processing" title=" image analysis and processing"> image analysis and processing</a> </p> <a href="https://publications.waset.org/abstracts/27882/development-of-algorithms-for-the-study-of-the-image-in-digital-form-for-satellite-applications-extraction-of-a-road-network-and-its-nodes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27882.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">274</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11191</span> The Interaction of Country-of-Manufacturing with Country-of-Design within Different Consumption Context</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ebru%20Genc">Ebru Genc</a>, <a href="https://publications.waset.org/abstracts/search?q=Shih-Ching%20Wang"> Shih-Ching Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In today’s globalized world, while companies move their production centers to developing countries in order to gain cost advantage, they receive negative responses from consumers because of the weak image of those countries. In this study, we looked at this tradeoff faced by multinational companies. Some companies that have headquarters in developed countries have devised a strategy of manipulating country-of-origin (COO) information by introducing the concept of country of design (COD). We analyzed the impact of country-of-manufacturing (COM) information on consumers’ product evaluation and purchase intention in the presence of different levels of COD information, namely, in terms of developed and developing countries. We found that it is not advantageous for a firm to publish a design location with a strong image if the firm is producing in a country that has a weak image. On the other hand, revealing COD information has a reinforcing effect on consumers’ product evaluation and purchase intention if the firm is producing in a country with a strong image. Second, we studied the impact of consumption context on this relationship (in terms of public or private use) and found that for products that are typically used in public, COM has significantly shown higher importance on product evaluation and purchase intention, compared to products typically used in private. However, our results show that consumption context shows no effect of an impact resulting from COD information. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=consumption%20context" title="consumption context">consumption context</a>, <a href="https://publications.waset.org/abstracts/search?q=country%20of%20design" title=" country of design"> country of design</a>, <a href="https://publications.waset.org/abstracts/search?q=country%20of%20manufacturing" title=" country of manufacturing"> country of manufacturing</a>, <a href="https://publications.waset.org/abstracts/search?q=country%20of%20origin" title=" country of origin"> country of origin</a> </p> <a href="https://publications.waset.org/abstracts/54939/the-interaction-of-country-of-manufacturing-with-country-of-design-within-different-consumption-context" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54939.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">249</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11190</span> Active Contours for Image Segmentation Based on Complex Domain Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sajid%20Hussain">Sajid Hussain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The complex domain approach for image segmentation based on active contour has been designed, which deforms step by step to partition an image into numerous expedient regions. A novel region-based trigonometric complex pressure force function is proposed, which propagates around the region of interest using image forces. The signed trigonometric force function controls the propagation of the active contour and the active contour stops on the exact edges of the object accurately. The proposed model makes the level set function binary and uses Gaussian smoothing kernel to adjust and escape the re-initialization procedure. The working principle of the proposed model is as follows: The real image data is transformed into complex data by iota (i) times of image data and the average iota (i) times of horizontal and vertical components of the gradient of image data is inserted in the proposed model to catch complex gradient of the image data. A simple finite difference mathematical technique has been used to implement the proposed model. The efficiency and robustness of the proposed model have been verified and compared with other state-of-the-art models. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=active%20contour" title=" active contour"> active contour</a>, <a href="https://publications.waset.org/abstracts/search?q=level%20set" title=" level set"> level set</a>, <a href="https://publications.waset.org/abstracts/search?q=Mumford%20and%20Shah%20model" title=" Mumford and Shah model"> Mumford and Shah model</a> </p> <a href="https://publications.waset.org/abstracts/161606/active-contours-for-image-segmentation-based-on-complex-domain-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161606.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">114</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11189</span> Hyperspectral Image Classification Using Tree Search Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shreya%20Pare">Shreya Pare</a>, <a href="https://publications.waset.org/abstracts/search?q=Parvin%20Akhter"> Parvin Akhter</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Remotely sensing image classification becomes a very challenging task owing to the high dimensionality of hyperspectral images. The pixel-wise classification methods fail to take the spatial structure information of an image. Therefore, to improve the performance of classification, spatial information can be integrated into the classification process. In this paper, the multilevel thresholding algorithm based on a modified fuzzy entropy function is used to perform the segmentation of hyperspectral images. The fuzzy parameters of the MFE function have been optimized by using a new meta-heuristic algorithm based on the Tree-Search algorithm. The segmented image is classified by a large distribution machine (LDM) classifier. Experimental results are shown on a hyperspectral image dataset. The experimental outputs indicate that the proposed technique (MFE-TSA-LDM) achieves much higher classification accuracy for hyperspectral images when compared to state-of-art classification techniques. The proposed algorithm provides accurate segmentation and classification maps, thus becoming more suitable for image classification with large spatial structures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20images" title=" hyperspectral images"> hyperspectral images</a>, <a href="https://publications.waset.org/abstracts/search?q=large%20distribution%20margin" title=" large distribution margin"> large distribution margin</a>, <a href="https://publications.waset.org/abstracts/search?q=modified%20fuzzy%20entropy%20function" title=" modified fuzzy entropy function"> modified fuzzy entropy function</a>, <a href="https://publications.waset.org/abstracts/search?q=multilevel%20thresholding" title=" multilevel thresholding"> multilevel thresholding</a>, <a href="https://publications.waset.org/abstracts/search?q=tree%20search%20algorithm" title=" tree search algorithm"> tree search algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20image%20classification%20using%20tree%20search%20algorithm" title=" hyperspectral image classification using tree search algorithm"> hyperspectral image classification using tree search algorithm</a> </p> <a href="https://publications.waset.org/abstracts/143284/hyperspectral-image-classification-using-tree-search-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143284.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11188</span> The Role of Attachment Styles, Gender Schemas, Sexual Self Schemas, and Body Exposures During Sexual Activity in Sexual Function, Marital Satisfaction, and Sexual Self-Esteem</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hossein%20Shareh">Hossein Shareh</a>, <a href="https://publications.waset.org/abstracts/search?q=Farhad%20Seifi"> Farhad Seifi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study was to examine the role of attachment styles, gender schemas, sexual-self schemas, and body image during sexual activity in sexual function, marital satisfaction, and sexual self-esteem. The sampling method was among married women who were living in Mashhad; a snowball selected 765 people. Questionnaires and measures of adult attachment style (AAS), Bem Sex Role Inventory (BSRI), sexual self-schema (SSS), body exposure during sexual activity questionnaire (BESAQ), sexual function female inventory (FSFI), a short form of sexual self-esteem (SSEI-W-SF) and marital satisfaction (Enrich) were completed by participants. Data analysis using Pearson correlation and hierarchical regression and case analysis was performed by SPSS-19 software. The results showed that there is a significant correlation (P <0.05) between attachment and sexual function (r=0.342), marital satisfaction (r=0.351) and sexual self-esteem (r =0.292). A correlation (P <0.05) was observed between sexual schema (r=0.342) and sexual esteem (r=0.31). A meaningful correlation (P <0.05) exists between gender stereotypes and sexual function (r=0.352). There was a significant inverse correlation (P <0.05) between body image and their performance during sexual activity (r=0.41). 
No significant relationship emerged between gender schemas, sexual schemas, or body image and marital satisfaction, nor between gender schemas or body image and sexual self-esteem. Regression analysis showed that sexual function can be predicted from attachment styles, gender schemas, sexual self-schemas, and body exposure during sexual activity; marital satisfaction can be predicted from attachment style and gender schema; and, to some extent, sexual self-esteem can be predicted from attachment style and gender schemas.
Keywords: attachment styles, gender and sexual schemas, body image, sexual function, marital satisfaction, sexual self-esteem
Procedia: https://publications.waset.org/abstracts/186720/the-role-of-attachment-styles-gender-schemas-sexual-self-schemas-and-body-exposures-during-sexual-activity-in-sexual-function-marital-satisfaction-and-sexual-self-esteem | PDF: https://publications.waset.org/abstracts/186720.pdf | Downloads: 39

11187. High Speed Image Rotation Algorithm
Authors: Hee-Choul Kwon, Hyungjin Cho, Heeyong Kwon
Abstract: Image rotation is one of the main pre-processing steps in image processing and image pattern recognition. It is usually implemented with rotation-matrix multiplication, which requires many floating-point arithmetic operations and trigonometric function calculations, so execution time is long. We propose a new high-speed image rotation algorithm that does away with these two major time-consuming operations. We compare the proposed algorithm with the conventional rotation algorithm on images of various sizes. Experimental results show that the proposed algorithm is superior to the conventional ones.
Keywords: high speed rotation operation, image processing, image rotation, pattern recognition, transformation matrix
Procedia: https://publications.waset.org/abstracts/25258/high-speed-image-rotation-algorithm | PDF: https://publications.waset.org/abstracts/25258.pdf | Downloads: 506
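For context, the conventional rotation that the abstract benchmarks against is inverse mapping with the 2x2 rotation matrix: every output pixel costs floating-point multiplications, which is exactly the overhead the proposed algorithm removes. The function below is a generic baseline, not the authors' method:

```python
import numpy as np

def rotate_nn(img, theta_deg):
    """Baseline rotation by inverse mapping with the rotation matrix and
    nearest-neighbor sampling."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(theta_deg)
    cos_t, sin_t = np.cos(t), np.sin(t)   # trig evaluated once, not per pixel
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse rotation: for each output pixel, find its source coordinate
    xsrc = cos_t * (xs - cx) + sin_t * (ys - cy) + cx
    ysrc = -sin_t * (xs - cx) + cos_t * (ys - cy) + cy
    xi = np.rint(xsrc).astype(int)
    yi = np.rint(ysrc).astype(int)
    valid = (0 <= xi) & (xi < w) & (0 <= yi) & (yi < h)
    out = np.zeros_like(img)
    out[ys[valid], xs[valid]] = img[yi[valid], xi[valid]]
    return out
```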
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=high%20speed%20rotation%20operation" title="high speed rotation operation">high speed rotation operation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20rotation" title=" image rotation"> image rotation</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=transformation%20matrix" title=" transformation matrix"> transformation matrix</a> </p> <a href="https://publications.waset.org/abstracts/25258/high-speed-image-rotation-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25258.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11186</span> Speeding-up Gray-Scale FIC by Moments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eman%20A.%20Al-Hilo">Eman A. Al-Hilo</a>, <a href="https://publications.waset.org/abstracts/search?q=Hawraa%20H.%20Al-Waelly"> Hawraa H. Al-Waelly</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, fractal compression (FIC) technique is introduced based on using moment features to block indexing the zero-mean range-domain blocks. The moment features have been used to speed up the IFS-matching stage. Its moments ratio descriptor is used to filter the domain blocks and keep only the blocks that are suitable to be IFS matched with tested range block. The results of tests conducted on Lena picture and Cat picture (256 pixels, resolution 24 bits/pixel) image showed a minimum encoding time (0.89 sec for Lena image and 0.78 of Cat image) with appropriate PSNR (30.01dB for Lena image and 29.8 of Cat image). The reduction in ET is about 12% for Lena and 67% for Cat image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fractal%20gray%20level%20image" title="fractal gray level image">fractal gray level image</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal%20compression%20technique" title=" fractal compression technique"> fractal compression technique</a>, <a href="https://publications.waset.org/abstracts/search?q=iterated%20function%20system" title=" iterated function system"> iterated function system</a>, <a href="https://publications.waset.org/abstracts/search?q=moments%20feature" title=" moments feature"> moments feature</a>, <a href="https://publications.waset.org/abstracts/search?q=zero-mean%20range-domain%20block" title=" zero-mean range-domain block"> zero-mean range-domain block</a> </p> <a href="https://publications.waset.org/abstracts/19903/speeding-up-gray-scale-fic-by-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19903.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">492</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11185</span> Red Green Blue Image Encryption Based on Paillier Cryptographic System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mamadou%20I.%20Wade">Mamadou I. Wade</a>, <a href="https://publications.waset.org/abstracts/search?q=Henry%20C.%20Ogworonjo"> Henry C. Ogworonjo</a>, <a href="https://publications.waset.org/abstracts/search?q=Madiha%20Gul"> Madiha Gul</a>, <a href="https://publications.waset.org/abstracts/search?q=Mandoye%20Ndoye"> Mandoye Ndoye</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Chouikha"> Mohamed Chouikha</a>, <a href="https://publications.waset.org/abstracts/search?q=Wayne%20Patterson"> Wayne Patterson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a novel application of the Paillier cryptographic system to the encryption of RGB (Red Green Blue) images. In this method, an RGB image is first separated into its constituent channel images, and the Paillier encryption function is applied to each of the channels pixel intensity values. Next, the encrypted image is combined and compressed if necessary before being transmitted through an unsecured communication channel. The transmitted image is subsequently recovered by a decryption process. We performed a series of security and performance analyses to the recovered images in order to verify their robustness to security attack. The results show that the proposed image encryption scheme produces highly secured encrypted images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title="image encryption">image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=Paillier%20cryptographic%20system" title=" Paillier cryptographic system"> Paillier cryptographic system</a>, <a href="https://publications.waset.org/abstracts/search?q=RBG%20image%20encryption" title=" RBG image encryption"> RBG image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=Paillier" title=" Paillier"> Paillier</a> </p> <a href="https://publications.waset.org/abstracts/79232/red-green-blue-image-encryption-based-on-paillier-cryptographic-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79232.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">238</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11184</span> Nation Branding: Guidelines for Identity Development and Image Perception of Thailand Brand in Health and Wellness Tourism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiraporn%20Prommaha">Jiraporn Prommaha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this research is to study the development of Thailand Brand Identity and the perception of its image in order to find any guidelines for the identity development and the image perception of Thailand Brand in Health and Wellness Tourism. The paper is conducted through mixed methods research, both the qualitative and quantitative researches. The qualitative focuses on the in-depth interview of executive administrations from public and private sectors involved scholars and experts in identity and image issue, main 11 people. The quantitative research was done by the questionnaires to collect data from foreign tourists 800; Chinese tourists 400 and UK tourists 400. The technique used for this was the Exploratory Factor Analysis (EFA), this was to determine the relation between the structures of the variables by categorizing the variables into group by applying the Varimax rotation technique. This technique showed recognition the Thailand brand image related to the 2 countries, China and UK. The results found that guidelines for brand identity development and image perception of health and wellness tourism in Thailand; as following (1) Develop communication in order to understanding of the meaning of the word 'Health and beauty tourism' throughout the country, (2) Develop human resources as a national agenda, (3) Develop awareness rising in the conservation and preservation of natural resources of the country, (4) Develop the cooperation of all stakeholders in Health and Wellness Businesses, (5) Develop digital communication throughout the country and (6) Develop safety in Tourism. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brand%20identity" title="brand identity">brand identity</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20perception" title=" image perception"> image perception</a>, <a href="https://publications.waset.org/abstracts/search?q=nation%20branding" title=" nation branding"> nation branding</a>, <a href="https://publications.waset.org/abstracts/search?q=health%20and%20wellness%20tourism" title=" health and wellness tourism"> health and wellness tourism</a>, <a href="https://publications.waset.org/abstracts/search?q=mixed%20methods%20research" title=" mixed methods research"> mixed methods research</a> </p> <a href="https://publications.waset.org/abstracts/79575/nation-branding-guidelines-for-identity-development-and-image-perception-of-thailand-brand-in-health-and-wellness-tourism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79575.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">200</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11183</span> High-Capacity Image Steganography using Wavelet-based Fusion on Deep Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amal%20Khalifa">Amal Khalifa</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicolas%20Vana%20Santos"> Nicolas Vana Santos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Steganography has been known for centuries as an efficient approach for covert communication. Due to its popularity and ease of access, image steganography has attracted researchers to find secure techniques for hiding information within an innocent looking cover image. In this research, we propose a novel deep-learning approach to digital image steganography. The proposed method, DeepWaveletFusion, uses convolutional neural networks (CNN) to hide a secret image into a cover image of the same size. Two CNNs are trained back-to-back to merge the Discrete Wavelet Transform (DWT) of both colored images and eventually be able to blindly extract the hidden image. Based on two different image similarity metrics, a weighted gain function is used to guide the learning process and maximize the quality of the retrieved secret image and yet maintaining acceptable imperceptibility. Experimental results verified the high recoverability of DeepWaveletFusion which outperformed similar deep-learning-based methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=steganography" title=" steganography"> steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title=" discrete wavelet transform"> discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a> </p> <a href="https://publications.waset.org/abstracts/170293/high-capacity-image-steganography-using-wavelet-based-fusion-on-deep-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170293.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11182</span> Gray Level Image Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Roza%20Afarin">Roza Afarin</a>, <a href="https://publications.waset.org/abstracts/search?q=Saeed%20Mozaffari"> Saeed Mozaffari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this paper is image encryption using Genetic Algorithm (GA). The proposed encryption method consists of two phases. In modification phase, pixels locations are altered to reduce correlation among adjacent pixels. Then, pixels values are changed in the diffusion phase to encrypt the input image. Both phases are performed by GA with binary chromosomes. For modification phase, these binary patterns are generated by Local Binary Pattern (LBP) operator while for diffusion phase binary chromosomes are obtained by Bit Plane Slicing (BPS). Initial population in GA includes rows and columns of the input image. Instead of subjective selection of parents from this initial population, a random generator with predefined key is utilized. It is necessary to decrypt the coded image and reconstruct the initial input image. Fitness function is defined as average of transition from 0 to 1 in LBP image and histogram uniformity in modification and diffusion phases, respectively. Randomness of the encrypted image is measured by entropy, correlation coefficients and histogram analysis. Experimental results show that the proposed method is fast enough and can be used effectively for image encryption. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=correlation%20coefficients" title="correlation coefficients">correlation coefficients</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title=" image encryption"> image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20entropy" title=" image entropy"> image entropy</a> </p> <a href="https://publications.waset.org/abstracts/10723/gray-level-image-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10723.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">330</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11181</span> Design and Implementation of an Image Based System to Enhance the Security of ATM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seyed%20Nima%20Tayarani%20Bathaie">Seyed Nima Tayarani Bathaie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an image-receiving system was designed and implemented through optimization of object detection algorithms using Haar features. This optimized algorithm served as face and eye detection separately. Then, cascading them led to a clear image of the user. Utilization of this feature brought about higher security by preventing fraud. This attribute results from the fact that services will be given to the user on condition that a clear image of his face has already been captured which would exclude the inappropriate person. In order to expedite processing and eliminating unnecessary ones, the input image was compressed, a motion detection function was included in the program, and detection window size was confined. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20detection%20algorithm" title="face detection algorithm">face detection algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=Haar%20features" title=" Haar features"> Haar features</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20of%20ATM" title=" security of ATM"> security of ATM</a> </p> <a href="https://publications.waset.org/abstracts/3011/design-and-implementation-of-an-image-based-system-to-enhance-the-security-of-atm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3011.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">419</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11180</span> Deepnic, A Method to Transform Each Variable into Image for Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nguyen%20J.%20M.">Nguyen J. 
11180. Deepnic, A Method to Transform Each Variable into Image for Deep Learning
Authors: Nguyen J. M., Lucas G., Brunner M., Ruan S., Antonioli D.
Abstract: Deep learning based on convolutional neural networks (CNNs) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image in which each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance, and its color corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and the range of the input coefficients. Each variable can therefore be expressed as a function of a matrix of two vectors corresponding to an image whose pixels express predictive capability. Our objective is to transform each variable of tabular data into an image that can be analyzed by CNNs, unlike other methods that use all the variables to construct a single image. We analyze the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used; the predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expressions on an Affymetrix chip.
Keywords: tabular data, deep learning, perfect trees, NICS
Procedia: https://publications.waset.org/abstracts/152479/deepnic-a-method-to-transform-each-variable-into-image-for-deep-learning | PDF: https://publications.waset.org/abstracts/152479.pdf | Downloads: 90
11179. Image Segmentation Using Active Contours Based on Anisotropic Diffusion
Authors: Shafiullah Soomro
Abstract: Active contours are an image segmentation technique whose goal is to capture the boundaries of a required object within an image. In this paper, we propose a novel image segmentation method using an active contour based on an anisotropic diffusion feature-enhancement technique. Traditional active contour methods use only pixel information to perform segmentation, which produces inaccurate results when an image contains noise or a complex background. We use the Perona-Malik diffusion scheme for feature enhancement, which sharpens object boundaries and blurs background variations. Our main contribution is the formulation of a new signed pressure force (SPF) function that uses global intensity information across the regions. By minimizing an energy functional within a partial-differential framework, the proposed method captures semantically meaningful boundaries instead of uninteresting regions. Finally, a Gaussian kernel eliminates the need to reinitialize the level-set function. We use several synthetic and real images from different modalities to validate the method's performance; experimentally, the proposed method performs better both qualitatively and quantitatively, yielding results with higher accuracy than other state-of-the-art methods.
Keywords: active contours, anisotropic diffusion, level-set, partial differential equations
Procedia: https://publications.waset.org/abstracts/94886/image-segmentation-using-active-contours-based-on-anisotropic-diffusion | PDF: https://publications.waset.org/abstracts/94886.pdf | Downloads: 161
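The Perona-Malik pre-processing step mentioned above is standard and compact. A minimal sketch, using the exponential edge-stopping function and periodic boundaries (via np.roll) for brevity:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: smooths homogeneous regions
    while preserving edges -- the feature-enhancement step run before
    evolving the contour."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # Finite differences toward the four neighbors
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # Edge-stopping function g = exp(-(|grad| / kappa)^2):
        # small gradients diffuse, large gradients (edges) are preserved.
        cN = np.exp(-(dN / kappa) ** 2)
        cS = np.exp(-(dS / kappa) ** 2)
        cE = np.exp(-(dE / kappa) ** 2)
        cW = np.exp(-(dW / kappa) ** 2)
        u += lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return u
```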
11178. New Variational Approach for Contrast Enhancement of Color Image
Authors: Wanhyun Cho, Seongchae Seo, Soonja Kang
Abstract: In this work, we propose a variational technique for image contrast enhancement that utilizes global and local information around each pixel. The energy functional is defined by a weighted linear combination of three terms: a local contrast term, a global contrast term, and a dispersion term. The local contrast term improves the contrast of the input image by increasing the gray-level differences between each pixel and its neighbors, exploiting contextual information around each pixel. The global contrast term enhances contrast by minimizing the difference between the image's empirical distribution function and a cumulative distribution function, making the distribution of pixel values symmetric about the median. The dispersion term controls the departure of new pixel values from those of the original image, preserving the original image's characteristics as far as possible. We then derive the Euler-Lagrange equation for the image that minimizes the proposed functional, using the fundamental lemma of the calculus of variations, and solve it with a gradient descent method, one of the dynamic approximation techniques. Finally, various experiments demonstrate that the proposed method enhances the contrast of color images better than existing techniques.
Keywords: color image, contrast enhancement technique, variational approach, Euler-Lagrange equation, dynamic approximation method, EME measure
Procedia: https://publications.waset.org/abstracts/10574/new-variational-approach-for-contrast-enhancement-of-color-image | PDF: https://publications.waset.org/abstracts/10574.pdf | Downloads: 449
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acoustic%20room%20impulse%20response" title="acoustic room impulse response">acoustic room impulse response</a>, <a href="https://publications.waset.org/abstracts/search?q=frequency%20dependent%20reflection%20coefficients" title=" frequency dependent reflection coefficients"> frequency dependent reflection coefficients</a>, <a href="https://publications.waset.org/abstracts/search?q=Green%27s%20function" title=" Green&#039;s function"> Green&#039;s function</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20model" title=" image model"> image model</a> </p> <a href="https://publications.waset.org/abstracts/152987/acoustic-room-impulse-response-computation-with-image-sources-and-frequency-dependent-boundary-reflection-coefficients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152987.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11176</span> Design and Implementation of Image Super-Resolution for Myocardial Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Super-resolution is the technique of intelligently upscaling images, avoiding artifacts or blurring, and deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution is a process of obtaining a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve image quality in scaled down images in the image domain, its effects on the Fourier-based technique remains unknown. Super-resolution substantially improved the spatial resolution of the patient LGE images by sharpening the edges of the heart and the scar. This paper aims at investigating the effects of single image super-resolution on Fourier-based and image based methods of scale-up. In this paper, first, generate a training phase of the low-resolution image and high-resolution image to obtain dictionary. In the test phase, first, generate a patch and then difference of high-resolution image and interpolation image from the low-resolution image. Next simulation of the image is obtained by applying convolution method to the dictionary creation image and patch extracted the image. Finally, super-resolution image is obtained by combining the fused image and difference of high-resolution and interpolated image. Super-resolution reduces image errors and improves the image quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20dictionary%20creation" title="image dictionary creation">image dictionary creation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20super-resolution" title=" image super-resolution"> image super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=LGE%20images" title=" LGE images"> LGE images</a>, <a href="https://publications.waset.org/abstracts/search?q=patch%20extraction" title=" patch extraction"> patch extraction</a> </p> <a href="https://publications.waset.org/abstracts/59494/design-and-implementation-of-image-super-resolution-for-myocardial-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59494.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11175</span> The &#039;Human Medium&#039; in Communicating the National Image: A Case Study of Chinese Middle-Class Tourists Visiting Japan</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abigail%20Qian%20Zhou">Abigail Qian Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, the prosperity of mass tourism in China has accelerated the breadth and depth of direct communication between countries, and the national image has been placed in a new communication context. Outbound tourists are not only directly involved in the formation of the national image, but are also the most direct medium and the most active symbol representing the national image. This study uses Chinese middle-class tourists visiting Japan as a case study, and analyzes, through participant observation and semi-structured interviews, the communication function of the national image transmitted by 'human medium' in tourism activities. It also explores the 'human medium' in the era of mass tourism. This study hopes to build a bridge for tourism research and national image and media studies. It will provide a theoretical basis and practical guidance for promoting the national image, strengthening exchanges between tourists and local populations, and expanding the tourism market in the future. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20medium" title="human medium">human medium</a>, <a href="https://publications.waset.org/abstracts/search?q=national%20image" title=" national image"> national image</a>, <a href="https://publications.waset.org/abstracts/search?q=communication" title=" communication"> communication</a>, <a href="https://publications.waset.org/abstracts/search?q=Chinese%20middle%20class" title=" Chinese middle class"> Chinese middle class</a>, <a href="https://publications.waset.org/abstracts/search?q=outbound%20tourists" title=" outbound tourists"> outbound tourists</a> </p> <a href="https://publications.waset.org/abstracts/117390/the-human-medium-in-communicating-the-national-image-a-case-study-of-chinese-middle-class-tourists-visiting-japan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/117390.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11174</span> Image Compression Using Block Power Method for SVD Decomposition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=El%20Asnaoui%20Khalid">El Asnaoui Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Chawki%20Youness"> Chawki Youness</a>, <a href="https://publications.waset.org/abstracts/search?q=Aksasse%20Brahim"> Aksasse Brahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ouanan%20Mohammed"> Ouanan Mohammed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In these recent decades, the important and fast growth in the development and demand of multimedia products is contributing to an insufficient in the bandwidth of device and network storage memory. Consequently, the theory of data compression becomes more significant for reducing the data redundancy in order to save more transfer and storage of data. In this context, this paper addresses the problem of the lossless and the near-lossless compression of images. This proposed method is based on Block SVD Power Method that overcomes the disadvantages of Matlab's SVD function. The experimental results show that the proposed algorithm has a better compression performance compared with the existing compression algorithms that use the Matlab's SVD function. In addition, the proposed approach is simple and can provide different degrees of error resilience, which gives, in a short execution time, a better image compression. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20compression" title="image compression">image compression</a>, <a href="https://publications.waset.org/abstracts/search?q=SVD" title=" SVD"> SVD</a>, <a href="https://publications.waset.org/abstracts/search?q=block%20SVD%20power%20method" title=" block SVD power method"> block SVD power method</a>, <a href="https://publications.waset.org/abstracts/search?q=lossless%20compression" title=" lossless compression"> lossless compression</a>, <a href="https://publications.waset.org/abstracts/search?q=near%20lossless" title=" near lossless"> near lossless</a> </p> <a href="https://publications.waset.org/abstracts/34041/image-compression-using-block-power-method-for-svd-decomposition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34041.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">387</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11173</span> A Method of the Semantic on Image Auto-Annotation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lin%20Huo">Lin Huo</a>, <a href="https://publications.waset.org/abstracts/search?q=Xianwei%20Liu"> Xianwei Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jingxiong%20Zhou"> Jingxiong Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, due to the existence of semantic gap between image visual features and human concepts, the semantic of image auto-annotation has become an important topic. Firstly, by extract low-level visual features of the image, and the corresponding Hash method, mapping the feature into the corresponding Hash coding, eventually, transformed that into a group of binary string and store it, image auto-annotation by search is a popular method, we can use it to design and implement a method of image semantic auto-annotation. Finally, Through the test based on the Corel image set, and the results show that, this method is effective. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20auto-annotation" title="image auto-annotation">image auto-annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20correlograms" title=" color correlograms"> color correlograms</a>, <a href="https://publications.waset.org/abstracts/search?q=Hash%20code" title=" Hash code"> Hash code</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/15628/a-method-of-the-semantic-on-image-auto-annotation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15628.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">497</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11172</span> Investigation of the Speckle Pattern Effect for Displacement Assessments by Digital Image Correlation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Salim%20%C3%87al%C4%B1%C5%9Fkan">Salim Çalışkan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hakan%20Aky%C3%BCz"> Hakan Akyüz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digital image correlation has been accustomed as a versatile and efficient method for measuring displacements on the article surfaces by comparing reference subsets in undeformed images with the define target subset in the distorted image. The theoretical model points out that the accuracy of the digital image correlation displacement data can be exactly anticipated based on the divergence of the image noise and the sum of the squares of the subset intensity gradients. The digital image correlation procedure locates each subset of the original image in the distorted image. The software then determines the displacement values of the centers of the subassemblies, providing the complete displacement measures. In this paper, the effect of the speckle distribution and its effect on displacements measured out plane displacement data as a function of the size of the subset was investigated. Nine groups of speckle patterns were used in this study: samples are sprayed randomly by pre-manufactured patterns of three different hole diameters, each with three coverage ratios, on a computer numerical control punch press. The resulting displacement values, referenced at the center of the subset, are evaluated based on the average of the displacements of the pixel’s interior the subset. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20image%20correlation" title="digital image correlation">digital image correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=speckle%20pattern" title=" speckle pattern"> speckle pattern</a>, <a href="https://publications.waset.org/abstracts/search?q=experimental%20mechanics" title=" experimental mechanics"> experimental mechanics</a>, <a href="https://publications.waset.org/abstracts/search?q=tensile%20test" title=" tensile test"> tensile test</a>, <a href="https://publications.waset.org/abstracts/search?q=aluminum%20alloy" title=" aluminum alloy"> aluminum alloy</a> </p> <a href="https://publications.waset.org/abstracts/171900/investigation-of-the-speckle-pattern-effect-for-displacement-assessments-by-digital-image-correlation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171900.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11171</span> A Technique for Image Segmentation Using K-Means Clustering Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadia%20Basar">Sadia Basar</a>, <a href="https://publications.waset.org/abstracts/search?q=Naila%20Habib"> Naila Habib</a>, <a href="https://publications.waset.org/abstracts/search?q=Awais%20Adnan"> Awais Adnan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper presents the Technique for Image Segmentation Using K-Means Clustering Classification. The presented algorithms were specific, however, missed the neighboring information and required high-speed computerized machines to run the segmentation algorithms. Clustering is the process of partitioning a group of data points into a small number of clusters. The proposed method is content-aware and feature extraction method which is able to run on low-end computerized machines, simple algorithm, required low-quality streaming, efficient and used for security purpose. It has the capability to highlight the boundary and the object. At first, the user enters the data in the representation of the input. Then in the next step, the digital image is converted into groups clusters. Clusters are divided into many regions. The same categories with same features of clusters are assembled within a group and different clusters are placed in other groups. Finally, the clusters are combined with respect to similar features and then represented in the form of segments. The clustered image depicts the clear representation of the digital image in order to highlight the regions and boundaries of the image. At last, the final image is presented in the form of segments. All colors of the image are separated in clusters. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering" title="clustering">clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=K-means%20function" title=" K-means function"> K-means function</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20and%20global%20minimum" title=" local and global minimum"> local and global minimum</a>, <a href="https://publications.waset.org/abstracts/search?q=region" title=" region"> region</a> </p> <a href="https://publications.waset.org/abstracts/25635/a-technique-for-image-segmentation-using-k-means-clustering-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25635.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">376</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11170</span> Blind Super-Resolution Reconstruction Based on PSF Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Osama%20A.%20Omer">Osama A. Omer</a>, <a href="https://publications.waset.org/abstracts/search?q=Amal%20Hamed"> Amal Hamed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Successful blind image Super-Resolution algorithms require the exact estimation of the Point Spread Function (PSF). In the absence of any prior information about the imagery system and the true image; this estimation is normally done by trial and error experimentation until an acceptable restored image quality is obtained. Multi-frame blind Super-Resolution algorithms often have disadvantages of slow convergence and sensitiveness to complex noises. This paper presents a Super-Resolution image reconstruction algorithm based on estimation of the PSF that yields the optimum restored image quality. The estimation of PSF is performed by the knife-edge method and it is implemented by measuring spreading of the edges in the reproduced HR image itself during the reconstruction process. The proposed image reconstruction approach is using L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. A series of experiment results show that the proposed method can outperform other previous work robustly and efficiently. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind" title="blind">blind</a>, <a href="https://publications.waset.org/abstracts/search?q=PSF" title=" PSF"> PSF</a>, <a href="https://publications.waset.org/abstracts/search?q=super-resolution" title=" super-resolution"> super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=knife-edge" title=" knife-edge"> knife-edge</a>, <a href="https://publications.waset.org/abstracts/search?q=blurring" title=" blurring"> blurring</a>, <a href="https://publications.waset.org/abstracts/search?q=bilateral" title=" bilateral"> bilateral</a>, <a href="https://publications.waset.org/abstracts/search?q=L1%20norm" title=" L1 norm"> L1 norm</a> </p> <a href="https://publications.waset.org/abstracts/1385/blind-super-resolution-reconstruction-based-on-psf-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1385.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11169</span> Improvement of Bone Scintography Image Using Image Texture Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousif%20Mohamed%20Y.%20Abdallah">Yousif Mohamed Y. Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Eltayeb%20Wagallah"> Eltayeb Wagallah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image enhancement allows the observer to see details in images that may not be immediately observable in the original image. Image enhancement is the transformation or mapping of one image to another. The enhancement of certain features in images is accompanied by undesirable effects. To achieve maximum image quality after denoising, a new, low order, local adaptive Gaussian scale mixture model and median filter were presented, which accomplishes nonlinearities from scattering a new nonlinear approach for contrast enhancement of bones in bone scan images using both gamma correction and negative transform methods. The usual assumption of a distribution of gamma and Poisson statistics only lead to overestimation of the noise variance in regions of low intensity but to underestimation in regions of high intensity and therefore to non-optional results. The contrast enhancement results were obtained and evaluated using MatLab program in nuclear medicine images of the bones. The optimal number of bins, in particular the number of gray-levels, is chosen automatically using entropy and average distance between the histogram of the original gray-level distribution and the contrast enhancement function’s curve. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bone%20scan" title="bone scan">bone scan</a>, <a href="https://publications.waset.org/abstracts/search?q=nuclear%20medicine" title=" nuclear medicine"> nuclear medicine</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing%20technique" title=" image processing technique"> image processing technique</a> </p> <a href="https://publications.waset.org/abstracts/13956/improvement-of-bone-scintography-image-using-image-texture-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13956.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11168</span> Deployment of Matrix Transpose in Digital Image Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Okike%20Benjamin">Okike Benjamin</a>, <a href="https://publications.waset.org/abstracts/search?q=Garba%20E%20J.%20D."> Garba E J. D.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Encryption is used to conceal information from prying eyes. Presently, information and data encryption are common due to the volume of data and information in transit across the globe on daily basis. Image encryption is yet to receive the attention of the researchers as deserved. In other words, video and multimedia documents are exposed to unauthorized accessors. The authors propose image encryption using matrix transpose. An algorithm that would allow image encryption is developed. In this proposed image encryption technique, the image to be encrypted is split into parts based on the image size. Each part is encrypted separately using matrix transpose. The actual encryption is on the picture elements (pixel) that make up the image. After encrypting each part of the image, the positions of the encrypted images are swapped before transmission of the image can take place. Swapping the positions of the images is carried out to make the encrypted image more robust for any cryptanalyst to decrypt. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title="image encryption">image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=matrices" title=" matrices"> matrices</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel" title=" pixel"> pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20transpose" title=" matrix transpose "> matrix transpose </a> </p> <a href="https://publications.waset.org/abstracts/48717/deployment-of-matrix-transpose-in-digital-image-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48717.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">421</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11167</span> Media Representation of China: A Content Analysis of Coverage of China-Related Energy in the New York Times</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lian%20Liu">Lian Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> By analyzing the content of the New York Times' China-related energy reports, this study aims to explore the construction of China's national image by the mainstream media in the United States. The study analyzes three aspects of the coverage: topics, reporting tendencies, and countries involved. The results of the study show that economic issues are the main focus of the New York Times’ China-related energy coverage, followed by political issues and environmental issues. Overall, the coverage tendency was mainly negative, but positive coverage was dominated by science and technology issues. In addition, the study found that U.S.-China relations and Sino-Russian relations were important contexts for the construction of China's national image in the NYT's China-related energy coverage. These stories highlight China's interstate interactions with the United States, Japan, and Russia, which serve as important links in the coverage. The findings of this study reveal some characteristics and trends of the U.S. mainstream media's country image of China, which are important for a deeper understanding of the U.S.-China relationship and the media's influence on the construction of the country's image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=media%20coverage" title="media coverage">media coverage</a>, <a href="https://publications.waset.org/abstracts/search?q=China" title=" China"> China</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20analysis" title=" content analysis"> content analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization%20technology" title=" visualization technology"> visualization technology</a> </p> <a href="https://publications.waset.org/abstracts/172721/media-representation-of-china-a-content-analysis-of-coverage-of-china-related-energy-in-the-new-york-times" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172721.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">86</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11166</span> X-Corner Detection for Camera Calibration Using Saddle Points</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdulrahman%20S.%20Alturki">Abdulrahman S. Alturki</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20S.%20Loomis"> John S. Loomis</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper discusses a corner detection algorithm for camera calibration. Calibration is a necessary step in many computer vision and image processing applications. Robust corner detection for an image of a checkerboard is required to determine intrinsic and extrinsic parameters. In this paper, an algorithm for fully automatic and robust X-corner detection is presented. Checkerboard corner points are automatically found in each image without user interaction or any prior information regarding the number of rows or columns. The approach represents each X-corner with a quadratic fitting function. Using the fact that the X-corners are saddle points, the coefficients in the fitting function are used to identify each corner location. The automation of this process greatly simplifies calibration. Our method is robust against noise and different camera orientations. Experimental analysis shows the accuracy of our method using actual images acquired at different camera locations and orientations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera%20calibration" title="camera calibration">camera calibration</a>, <a href="https://publications.waset.org/abstracts/search?q=corner%20detector" title=" corner detector"> corner detector</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detector" title=" edge detector"> edge detector</a>, <a href="https://publications.waset.org/abstracts/search?q=saddle%20points" title=" saddle points"> saddle points</a> </p> <a href="https://publications.waset.org/abstracts/40538/x-corner-detection-for-camera-calibration-using-saddle-points" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40538.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">406</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11165</span> Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20W.%20U.%20D.%20Chathurani">N. W. U. D. Chathurani</a>, <a href="https://publications.waset.org/abstracts/search?q=Shlomo%20Geva"> Shlomo Geva</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinod%20Chandran"> Vinod Chandran</a>, <a href="https://publications.waset.org/abstracts/search?q=Proboda%20Rajapaksha"> Proboda Rajapaksha </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR, based on the distance distribution of features and relative feature weights at the time of query processing. It is a simple yet effective approach, which is free from the effect of features&#39; dimensions, ranges, internal feature normalization and the distance measure. This approach can easily be adopted in any feature combination to improve retrieval quality. The proposed approach is empirically evaluated using two benchmark datasets for image classification (a subset of the Corel dataset and Oliva and Torralba) and compared with existing approaches. The performance of the proposed approach is confirmed with the significantly improved performance in comparison with the independently evaluated baseline of the previously proposed feature fusion approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title="feature fusion">feature fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=membership%20function" title=" membership function"> membership function</a>, <a href="https://publications.waset.org/abstracts/search?q=normalization" title=" normalization"> normalization</a> </p> <a href="https://publications.waset.org/abstracts/52968/image-retrieval-based-on-multi-feature-fusion-for-heterogeneous-image-databases" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">345</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=373">373</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=374">374</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a 
target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> 
</body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10